Jan 27 16:57:03 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 16:57:03 crc restorecon[4747]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 16:57:03 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 
16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 16:57:04 crc 
restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 16:57:04 crc restorecon[4747]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 16:57:04 crc restorecon[4747]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 16:57:05 crc kubenswrapper[5049]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 16:57:05 crc kubenswrapper[5049]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 16:57:05 crc kubenswrapper[5049]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 16:57:05 crc kubenswrapper[5049]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 16:57:05 crc kubenswrapper[5049]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 16:57:05 crc kubenswrapper[5049]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.370177 5049 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381116 5049 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381151 5049 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381161 5049 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381170 5049 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381181 5049 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381192 5049 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381201 5049 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381209 5049 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381218 5049 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381226 5049 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381234 5049 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381242 5049 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381249 5049 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381257 5049 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381266 5049 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381273 5049 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381281 5049 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381288 5049 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381296 5049 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381307 5049 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381318 5049 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381327 5049 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381336 5049 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381344 5049 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381353 5049 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381362 5049 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381370 5049 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381387 5049 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381396 5049 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381405 5049 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381414 5049 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381422 5049 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381429 5049 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381437 5049 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381445 5049 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381453 5049 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381463 5049 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381473 5049 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381482 5049 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381492 5049 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381501 5049 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381512 5049 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381520 5049 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381529 5049 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381537 5049 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381546 5049 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381554 5049 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381561 5049 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381569 5049 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381577 5049 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381584 5049 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381592 5049 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381600 5049 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381607 5049 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381617 5049 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381624 5049 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381635 5049 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381664 5049 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381698 5049 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381706 5049 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381714 5049 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381722 5049 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381729 5049 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381737 5049 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381745 5049 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381753 5049 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381765 5049 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381781 5049 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381794 5049 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381804 5049 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.381817 5049 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382799 5049 flags.go:64] FLAG: --address="0.0.0.0"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382826 5049 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382842 5049 flags.go:64] FLAG: --anonymous-auth="true"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382854 5049 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382866 5049 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382876 5049 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382889 5049 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382910 5049 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382921 5049 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382930 5049 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382940 5049 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382950 5049 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382959 5049 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382968 5049 flags.go:64] FLAG: --cgroup-root=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382977 5049 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382987 5049 flags.go:64] FLAG: --client-ca-file=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.382996 5049 flags.go:64] FLAG: --cloud-config=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383005 5049 flags.go:64] FLAG: --cloud-provider=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383014 5049 flags.go:64] FLAG: --cluster-dns="[]"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383026 5049 flags.go:64] FLAG: --cluster-domain=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383035 5049 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383045 5049 flags.go:64] FLAG: --config-dir=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383054 5049 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383064 5049 flags.go:64] FLAG: --container-log-max-files="5"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383077 5049 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383087 5049 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383097 5049 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383109 5049 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383118 5049 flags.go:64] FLAG: --contention-profiling="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383127 5049 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383136 5049 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383146 5049 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383155 5049 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383167 5049 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383176 5049 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383185 5049 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383194 5049 flags.go:64] FLAG: --enable-load-reader="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383204 5049 flags.go:64] FLAG: --enable-server="true"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383213 5049 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383225 5049 flags.go:64] FLAG: --event-burst="100"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383234 5049 flags.go:64] FLAG: --event-qps="50"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383243 5049 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383253 5049 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383263 5049 flags.go:64] FLAG: --eviction-hard=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383274 5049 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383283 5049 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383292 5049 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383302 5049 flags.go:64] FLAG: --eviction-soft=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383312 5049 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383320 5049 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383330 5049 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383340 5049 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383349 5049 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383358 5049 flags.go:64] FLAG: --fail-swap-on="true"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383366 5049 flags.go:64] FLAG: --feature-gates=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383378 5049 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383387 5049 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383396 5049 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383406 5049 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383415 5049 flags.go:64] FLAG: --healthz-port="10248"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383424 5049 flags.go:64] FLAG: --help="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383433 5049 flags.go:64] FLAG: --hostname-override=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383442 5049 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383452 5049 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383461 5049 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383469 5049 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383478 5049 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383487 5049 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383496 5049 flags.go:64] FLAG: --image-service-endpoint=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383505 5049 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383514 5049 flags.go:64] FLAG: --kube-api-burst="100"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383523 5049 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383533 5049 flags.go:64] FLAG: --kube-api-qps="50"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383541 5049 flags.go:64] FLAG: --kube-reserved=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383551 5049 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383561 5049 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383570 5049 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383579 5049 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383588 5049 flags.go:64] FLAG: --lock-file=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383598 5049 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383608 5049 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383617 5049 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383632 5049 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383641 5049 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383651 5049 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383660 5049 flags.go:64] FLAG: --logging-format="text"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383669 5049 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383708 5049 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383717 5049 flags.go:64] FLAG: --manifest-url=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383726 5049 flags.go:64] FLAG: --manifest-url-header=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383739 5049 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383748 5049 flags.go:64] FLAG: --max-open-files="1000000"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383760 5049 flags.go:64] FLAG: --max-pods="110"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383771 5049 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383783 5049 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383795 5049 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383806 5049 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383818 5049 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383830 5049 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383840 5049 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383860 5049 flags.go:64] FLAG: --node-status-max-images="50"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383870 5049 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383879 5049 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383889 5049 flags.go:64] FLAG: --pod-cidr=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383898 5049 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383912 5049 flags.go:64] FLAG: --pod-manifest-path=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383921 5049 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383931 5049 flags.go:64] FLAG: --pods-per-core="0"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383940 5049 flags.go:64] FLAG: --port="10250"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383949 5049 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383958 5049 flags.go:64] FLAG: --provider-id=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383967 5049 flags.go:64] FLAG: --qos-reserved=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383976 5049 flags.go:64] FLAG: --read-only-port="10255"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383985 5049 flags.go:64] FLAG: --register-node="true"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.383994 5049 flags.go:64] FLAG: --register-schedulable="true"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384006 5049 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384023 5049 flags.go:64] FLAG: --registry-burst="10"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384032 5049 flags.go:64] FLAG: --registry-qps="5"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384041 5049 flags.go:64] FLAG: --reserved-cpus=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384050 5049 flags.go:64] FLAG: --reserved-memory=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384062 5049 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384071 5049 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384080 5049 flags.go:64] FLAG: --rotate-certificates="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384090 5049 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384099 5049 flags.go:64] FLAG: --runonce="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384109 5049 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384119 5049 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384129 5049 flags.go:64] FLAG: --seccomp-default="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384140 5049 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384150 5049 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384161 5049 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384171 5049 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384180 5049 flags.go:64] FLAG: --storage-driver-password="root"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384190 5049 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384199 5049 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384208 5049 flags.go:64] FLAG: --storage-driver-user="root"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384218 5049 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384227 5049 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384236 5049 flags.go:64] FLAG: --system-cgroups=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384245 5049 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384259 5049 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384268 5049 flags.go:64] FLAG: --tls-cert-file=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384276 5049 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384288 5049 flags.go:64] FLAG: --tls-min-version=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384297 5049 flags.go:64] FLAG: --tls-private-key-file=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384306 5049 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384316 5049 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384325 5049 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384334 5049 flags.go:64] FLAG: --v="2"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384347 5049 flags.go:64] FLAG: --version="false"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384359 5049 flags.go:64] FLAG: --vmodule=""
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384372 5049 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.384382 5049 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384588 5049 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384599 5049 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384608 5049 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384618 5049 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384626 5049 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384635 5049 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384646 5049 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384656 5049 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384665 5049 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384735 5049 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384744 5049 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384752 5049 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384760 5049 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384769 5049 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384780 5049 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384789 5049 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384799 5049 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384809 5049 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384819 5049 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384828 5049 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384835 5049 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384843 5049 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384851 5049 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384859 5049 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384868 5049 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384877 5049 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384885 5049 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384893 5049 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384900 5049 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384908 5049 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384916 5049 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384923 5049 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384931 5049 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384939 5049 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384948 5049 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384956 5049 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384965 5049 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384973 5049 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384980 5049 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384988 5049 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.384996 5049 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385007 5049 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385017 5049 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385026 5049 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385035 5049 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385043 5049 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385051 5049 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385060 5049 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385070 5049 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385080 5049 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385090 5049 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385100 5049 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385109 5049 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385117 5049 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385125 5049 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385132 5049 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385141 5049 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385149 5049 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385156 5049 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385164 5049 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385172 5049 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385179 5049 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385187 5049 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385195 5049 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385202 5049 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385210 5049 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385218 5049 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385225 5049 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385233 5049 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385241 5049 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.385253 5049 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.385278 5049 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.396243 5049 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.396286 5049 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396382 5049 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396391 5049 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396397 5049 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396403 5049 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396409 5049 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396414 5049 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396420 5049 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396426 5049 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396433 5049 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396439 5049 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396445 5049 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396451 5049 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396457 5049 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396464 5049 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396470 5049 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396476 5049 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396482 5049 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396488 5049 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396495 5049 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396501 5049 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396507 5049 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396513 5049 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396518 5049 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396522 5049 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396527 5049 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396534 5049 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396544 5049 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396549 5049 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396555 5049 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396562 5049 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396568 5049 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396577 5049 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396582 5049 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396587 5049 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396594 5049 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396599 5049 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396604 5049 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396610 5049 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396614 5049 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396619 5049 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396624 5049 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396630 5049 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396636 5049 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396641 5049 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396646 5049 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396650 5049 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396655 5049 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396660 5049 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396665 5049 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396718 5049 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396723 5049 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396728 5049 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396733 5049 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396738 5049 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396743 5049 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396747 5049 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396752 5049 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396757 5049 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396762 5049 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396767 5049 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396772 5049 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396777 5049 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396781 5049 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396789 5049 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396794 5049 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396799 5049 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396804 5049 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396808 5049 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396813 5049 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396818 5049 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.396824 5049 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.396834 5049 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397006 5049 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397020 5049 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397028 5049 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397035 5049 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397041 5049 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397047 5049 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397053 5049 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397059 5049 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397065 5049 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397073 5049 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397081 5049 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397088 5049 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397095 5049 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397102 5049 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397108 5049 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397113 5049 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397118 5049 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397123 5049 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397129 5049 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397135 5049 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397141 5049 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397147 5049 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397153 5049 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397159 5049 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397166 5049 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397173 5049 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397180 5049 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397185 5049 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397190 5049 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397195 5049 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397202 5049 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397208 5049 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397215 5049 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397222 5049 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397229 5049 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397235 5049 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397241 5049 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397247 5049 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397253 5049 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397259 5049 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397265 5049 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397272 5049 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397278 5049 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397284 5049 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397290 5049 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397296 5049 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397302 5049 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397308 5049 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397314 5049 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397320 5049 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397328 5049 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397334 5049 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397341 5049 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397347 5049 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397352 5049 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397358 5049 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397365 5049 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397371 5049 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397376 5049 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397381 5049 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397386 5049 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397391 5049 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397396 5049 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397401 5049 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397406 5049 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397411 5049 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397416 5049 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397420 5049 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397425 5049 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397430 5049 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.397436 5049 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.397445 5049 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.398785 5049 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.404118 5049 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.404251 5049 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.405888 5049 server.go:997] "Starting client certificate rotation" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.405914 5049 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.406128 5049 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-13 23:40:21.156648959 +0000 UTC Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.406639 5049 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.435695 5049 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 16:57:05 crc kubenswrapper[5049]: E0127 16:57:05.440401 5049 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.442125 5049 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.461101 5049 log.go:25] "Validated CRI v1 runtime API" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.496832 5049 log.go:25] "Validated CRI v1 image API" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.498962 5049 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.505344 5049 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-16-52-34-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.505399 5049 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.533071 5049 manager.go:217] Machine: {Timestamp:2026-01-27 16:57:05.527395094 +0000 UTC m=+0.626368703 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e5f883ea-bc60-48f3-8792-0d2ec56b48dc BootID:52a9b7e1-dcbf-429a-a612-98ea421b6253 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 
Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:48:90:1e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:48:90:1e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:57:3b:dd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:85:c5:6a Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:6c:06:0c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:fa:c7:37 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:44:25:c0 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2e:55:e6:4b:9e:17 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:da:1c:1d:fe:2f:6d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified 
Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.533456 5049 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.533650 5049 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.535395 5049 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.535796 5049 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.535858 5049 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.536241 5049 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 
16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.536261 5049 container_manager_linux.go:303] "Creating device plugin manager" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.536889 5049 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.536925 5049 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.537852 5049 state_mem.go:36] "Initialized new in-memory state store" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.538001 5049 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.544987 5049 kubelet.go:418] "Attempting to sync node with API server" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.545024 5049 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.545072 5049 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.545094 5049 kubelet.go:324] "Adding apiserver pod source" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.545112 5049 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.549905 5049 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.550991 5049 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
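"Adding static pod path" and "Watching path /etc/kubernetes/manifests" mark the kubelet's second pod source: alongside the apiserver source, it ingests manifests dropped into a watched directory. On a single-node cluster like CRC this is what lets the control-plane containers come up while api-int.crc.testing:6443 is still refusing connections, as the reflector errors just below show. A minimal sketch of the event-driven half of such a directory watcher, using the third-party github.com/fsnotify/fsnotify module (the kubelet's own file source also does periodic full re-reads, which this omits, and would parse each manifest into a v1.Pod):

```go
// Sketch of a static-pod directory watcher, assuming the
// github.com/fsnotify/fsnotify module. Event-driven half only.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	const manifestDir = "/etc/kubernetes/manifests" // path from the log above

	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	if err := watcher.Add(manifestDir); err != nil {
		log.Fatal(err)
	}
	log.Printf("Watching path %q", manifestDir)

	for {
		select {
		case ev, ok := <-watcher.Events:
			if !ok {
				return
			}
			// A real implementation would decode the manifest and reconcile
			// the pod; here we only report what changed.
			switch {
			case ev.Op&(fsnotify.Create|fsnotify.Write) != 0:
				log.Printf("manifest added/updated: %s", ev.Name)
			case ev.Op&(fsnotify.Remove|fsnotify.Rename) != 0:
				log.Printf("manifest removed: %s", ev.Name)
			}
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Printf("watch error: %v", err)
		}
	}
}
```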
Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.554081 5049 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.554100 5049 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 27 16:57:05 crc kubenswrapper[5049]: E0127 16:57:05.554212 5049 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Jan 27 16:57:05 crc kubenswrapper[5049]: E0127 16:57:05.554269 5049 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.554375 5049 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.556163 5049 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.556196 5049 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.556206 5049 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.556216 5049 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.556231 5049 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.556243 5049 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.556254 5049 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.556270 5049 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.556282 5049 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.556293 5049 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.556307 5049 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.556317 5049 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.557508 5049 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 
16:57:05.558108 5049 server.go:1280] "Started kubelet" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.558294 5049 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.559449 5049 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.559470 5049 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 16:57:05 crc systemd[1]: Started Kubernetes Kubelet. Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.560450 5049 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.562639 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.562736 5049 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.562778 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:03:21.417594057 +0000 UTC Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.566528 5049 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.566565 5049 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 16:57:05 crc kubenswrapper[5049]: E0127 16:57:05.566804 5049 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.566963 5049 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 16:57:05 crc kubenswrapper[5049]: E0127 16:57:05.573446 5049 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="200ms" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.574173 5049 factory.go:153] Registering CRI-O factory Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.574208 5049 factory.go:221] Registration of the crio container factory successfully Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.574147 5049 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 27 16:57:05 crc kubenswrapper[5049]: E0127 16:57:05.574307 5049 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.574438 5049 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix 
/run/containerd/containerd.sock: connect: no such file or directory Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.574457 5049 factory.go:55] Registering systemd factory Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.574469 5049 factory.go:221] Registration of the systemd container factory successfully Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.574495 5049 factory.go:103] Registering Raw factory Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.574516 5049 manager.go:1196] Started watching for new ooms in manager Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.574878 5049 server.go:460] "Adding debug handlers to kubelet server" Jan 27 16:57:05 crc kubenswrapper[5049]: E0127 16:57:05.573921 5049 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.20:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ea4e94eb6059e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 16:57:05.558070686 +0000 UTC m=+0.657044245,LastTimestamp:2026-01-27 16:57:05.558070686 +0000 UTC m=+0.657044245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.575273 5049 manager.go:319] Starting recovery of all containers Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.584609 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585028 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585043 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585075 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585087 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585100 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" 
seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585115 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585127 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585145 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585158 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585171 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585183 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585195 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585211 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585226 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585240 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585252 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc 
kubenswrapper[5049]: I0127 16:57:05.585264 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585277 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585291 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585305 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585319 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585331 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585344 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585357 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585370 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585389 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585402 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585415 
5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585430 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585443 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585457 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585470 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585484 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585496 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585510 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585523 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585537 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585553 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585566 5049 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585580 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585598 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585613 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585627 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585640 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585654 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585684 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585700 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585714 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585731 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585746 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585760 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585779 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585794 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585809 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585825 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585840 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585854 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585869 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585883 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585898 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585911 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585925 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585939 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585953 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585967 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585981 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.585994 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586008 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586022 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586035 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586048 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586063 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586077 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586089 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586103 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586116 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586129 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586143 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586157 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586169 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586182 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586195 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586207 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586219 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586234 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586247 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586260 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586273 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586287 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586302 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586316 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586332 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586347 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586361 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586375 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586387 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586399 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586412 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586424 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586437 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586450 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586463 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586475 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586495 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586509 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586525 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586540 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586554 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586569 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586583 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586598 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586612 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586626 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586642 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586655 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586776 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586791 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586805 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586820 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586834 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586847 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586860 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586875 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586888 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586901 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586914 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586927 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586940 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586952 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586966 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586979 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.586992 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.587004 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.587017 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.587030 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.587042 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.587053 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.587066 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.587078 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.587096 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.587108 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.587122 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.587135 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.587147 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.587159 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.587172 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.588942 5049 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.588971 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.588987 5049 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589002 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589017 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589030 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589044 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589060 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589074 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589088 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589103 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589117 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589133 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589146 5049 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589160 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589174 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589190 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589203 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589216 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589230 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589243 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589257 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589271 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589284 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589297 5049 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589312 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589327 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589342 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589357 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589370 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589385 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589399 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589427 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589440 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589456 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589470 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589483 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589496 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589510 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589524 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589536 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589552 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589568 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589583 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589599 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589613 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589627 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589641 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589655 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589731 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589746 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589761 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589776 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589791 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589804 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589818 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589831 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589845 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589859 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589875 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589895 5049 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589910 5049 reconstruct.go:97] "Volume reconstruction finished" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.589920 5049 reconciler.go:26] "Reconciler: start to sync state" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.599763 5049 manager.go:324] Recovery completed Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.619802 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.621702 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.621754 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.621767 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.623241 5049 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.623268 5049 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.623301 5049 state_mem.go:36] "Initialized new in-memory state store" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.642044 5049 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.644591 5049 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.644667 5049 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.644750 5049 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 16:57:05 crc kubenswrapper[5049]: E0127 16:57:05.644854 5049 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.645059 5049 policy_none.go:49] "None policy: Start" Jan 27 16:57:05 crc kubenswrapper[5049]: W0127 16:57:05.646014 5049 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 27 16:57:05 crc kubenswrapper[5049]: E0127 16:57:05.646264 5049 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.647038 5049 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.647085 5049 state_mem.go:35] "Initializing new in-memory state store" Jan 27 16:57:05 crc kubenswrapper[5049]: E0127 16:57:05.667984 5049 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.706092 5049 manager.go:334] "Starting Device Plugin manager" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.706451 5049 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.706476 5049 server.go:79] "Starting device plugin registration server" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.706989 5049 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.707018 5049 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.707238 5049 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.707351 5049 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.707366 5049 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 16:57:05 crc kubenswrapper[5049]: E0127 16:57:05.713534 5049 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.745848 5049 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 16:57:05 crc kubenswrapper[5049]: 
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.746006 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.748215 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.748268 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.748286 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.748486 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.748878 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.748941 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.749994 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.750011 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.750029 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.750038 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.750044 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.750059 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.750173 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.750509 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.750635 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.750858 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.750893 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.750906 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.751056 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.751197 5049 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.751235 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.751993 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.752019 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.752035 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.752157 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.752205 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.752226 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.752417 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.752485 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.752545 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.752845 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.752997 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.753051 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.753912 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.753952 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.753968 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.754206 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.754244 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.755112 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.755147 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.755153 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.755172 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.755177 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.755193 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:05 crc kubenswrapper[5049]: E0127 16:57:05.774382 5049 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="400ms" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.793841 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.793893 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.793964 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.794009 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.794062 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:05 crc 
Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.794108 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.794200 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.794314 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.794433 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.794475 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.794519 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.794562 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.794615 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.794659 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.794749 5049
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.807329 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.808966 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.809057 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.809087 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.809175 5049 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 16:57:05 crc kubenswrapper[5049]: E0127 16:57:05.810176 5049 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.20:6443: connect: connection refused" node="crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.896625 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.896744 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.896779 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.896811 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.896837 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.896859 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") 
" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.896881 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.896901 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.896925 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.896948 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.896946 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897023 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897079 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897074 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897105 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897124 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.896973 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897131 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897162 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897169 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897196 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897199 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897210 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897186 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897219 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897218 5049 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897253 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897232 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897345 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:05 crc kubenswrapper[5049]: I0127 16:57:05.897586 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.011190 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.012949 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.012991 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.013007 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.013043 5049 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 16:57:06 crc kubenswrapper[5049]: E0127 16:57:06.013485 5049 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.20:6443: connect: connection refused" node="crc" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.074883 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.081306 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.110520 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.119517 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.123408 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 16:57:06 crc kubenswrapper[5049]: W0127 16:57:06.138802 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-19c118fdc9c21392da417b286c26482b6331f98a65ece806f6985dc6348bd06c WatchSource:0}: Error finding container 19c118fdc9c21392da417b286c26482b6331f98a65ece806f6985dc6348bd06c: Status 404 returned error can't find the container with id 19c118fdc9c21392da417b286c26482b6331f98a65ece806f6985dc6348bd06c Jan 27 16:57:06 crc kubenswrapper[5049]: W0127 16:57:06.140099 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c2ceb5276d3835f6cc507794c05e88d44c0c3283883b953ed3027ecc8f117109 WatchSource:0}: Error finding container c2ceb5276d3835f6cc507794c05e88d44c0c3283883b953ed3027ecc8f117109: Status 404 returned error can't find the container with id c2ceb5276d3835f6cc507794c05e88d44c0c3283883b953ed3027ecc8f117109 Jan 27 16:57:06 crc kubenswrapper[5049]: W0127 16:57:06.154333 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a0ac95d223d5e3a04fb3254533488043292c39ba805d4fc626c8d4f7f2888caf WatchSource:0}: Error finding container a0ac95d223d5e3a04fb3254533488043292c39ba805d4fc626c8d4f7f2888caf: Status 404 returned error can't find the container with id a0ac95d223d5e3a04fb3254533488043292c39ba805d4fc626c8d4f7f2888caf Jan 27 16:57:06 crc kubenswrapper[5049]: W0127 16:57:06.157997 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-5c4847b4cb0ac4e31154a58c286fa2bba3e0ccb4088562ac2cfe33b6f6eb1339 WatchSource:0}: Error finding container 5c4847b4cb0ac4e31154a58c286fa2bba3e0ccb4088562ac2cfe33b6f6eb1339: Status 404 returned error can't find the container with id 5c4847b4cb0ac4e31154a58c286fa2bba3e0ccb4088562ac2cfe33b6f6eb1339 Jan 27 16:57:06 crc kubenswrapper[5049]: W0127 16:57:06.165787 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4df0b30a7ae6f6d29d8547496f47f47de6cb8eb97151222fc6ef348b2e6450fb WatchSource:0}: Error finding container 4df0b30a7ae6f6d29d8547496f47f47de6cb8eb97151222fc6ef348b2e6450fb: Status 404 returned error can't find the container with id 4df0b30a7ae6f6d29d8547496f47f47de6cb8eb97151222fc6ef348b2e6450fb Jan 27 16:57:06 crc kubenswrapper[5049]: E0127 16:57:06.175291 5049 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="800ms" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.413808 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.415729 5049 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.415770 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.415782 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.415811 5049 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 16:57:06 crc kubenswrapper[5049]: E0127 16:57:06.416460 5049 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.20:6443: connect: connection refused" node="crc" Jan 27 16:57:06 crc kubenswrapper[5049]: W0127 16:57:06.481842 5049 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 27 16:57:06 crc kubenswrapper[5049]: E0127 16:57:06.481963 5049 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.559512 5049 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.563493 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 00:07:54.249061702 +0000 UTC Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.652342 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4df0b30a7ae6f6d29d8547496f47f47de6cb8eb97151222fc6ef348b2e6450fb"} Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.653658 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a0ac95d223d5e3a04fb3254533488043292c39ba805d4fc626c8d4f7f2888caf"} Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.655040 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5c4847b4cb0ac4e31154a58c286fa2bba3e0ccb4088562ac2cfe33b6f6eb1339"} Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.657373 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c2ceb5276d3835f6cc507794c05e88d44c0c3283883b953ed3027ecc8f117109"} Jan 27 16:57:06 crc kubenswrapper[5049]: I0127 16:57:06.658721 5049 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"19c118fdc9c21392da417b286c26482b6331f98a65ece806f6985dc6348bd06c"} Jan 27 16:57:06 crc kubenswrapper[5049]: E0127 16:57:06.900350 5049 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.20:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ea4e94eb6059e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 16:57:05.558070686 +0000 UTC m=+0.657044245,LastTimestamp:2026-01-27 16:57:05.558070686 +0000 UTC m=+0.657044245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 16:57:06 crc kubenswrapper[5049]: E0127 16:57:06.976518 5049 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="1.6s" Jan 27 16:57:07 crc kubenswrapper[5049]: W0127 16:57:07.051329 5049 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 27 16:57:07 crc kubenswrapper[5049]: E0127 16:57:07.051424 5049 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Jan 27 16:57:07 crc kubenswrapper[5049]: W0127 16:57:07.150107 5049 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 27 16:57:07 crc kubenswrapper[5049]: E0127 16:57:07.150228 5049 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Jan 27 16:57:07 crc kubenswrapper[5049]: W0127 16:57:07.183528 5049 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 27 16:57:07 crc kubenswrapper[5049]: E0127 16:57:07.183655 5049 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" 
logger="UnhandledError" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.216635 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.219028 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.219103 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.219125 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.219166 5049 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 16:57:07 crc kubenswrapper[5049]: E0127 16:57:07.219843 5049 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.20:6443: connect: connection refused" node="crc" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.559696 5049 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.563650 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:21:39.266209213 +0000 UTC Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.577219 5049 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 16:57:07 crc kubenswrapper[5049]: E0127 16:57:07.578717 5049 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.667664 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a"} Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.667748 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3"} Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.667760 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c"} Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.667772 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9"} Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.667769 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.669042 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.669073 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.669088 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.670560 5049 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622" exitCode=0 Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.670710 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622"} Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.670783 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.672930 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.673082 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.673487 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.673848 5049 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50" exitCode=0 Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.673939 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50"} Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.674027 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.675248 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.675273 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.675285 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.677147 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 
16:57:07.678003 5049 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e" exitCode=0 Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.678111 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.678107 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e"} Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.679767 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.679873 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.679953 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.680319 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.680364 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.680380 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.682053 5049 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="73d35fb87d34861d569ff2f1c70ab8ecd8ba9ed65c3bb1647522b416ebf925a8" exitCode=0 Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.682101 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"73d35fb87d34861d569ff2f1c70ab8ecd8ba9ed65c3bb1647522b416ebf925a8"} Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.682360 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.683447 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.683573 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:07 crc kubenswrapper[5049]: I0127 16:57:07.683656 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.076475 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.479923 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.559870 5049 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.564822 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:52:22.631820767 +0000 UTC Jan 27 16:57:08 crc kubenswrapper[5049]: E0127 16:57:08.577408 5049 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="3.2s" Jan 27 16:57:08 crc kubenswrapper[5049]: W0127 16:57:08.666011 5049 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Jan 27 16:57:08 crc kubenswrapper[5049]: E0127 16:57:08.666127 5049 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.686913 5049 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f18b111b7e2dc6f7853faccdbf9a45e9d46b5e8dce866626fb73b5e3e6167cab" exitCode=0 Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.687068 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f18b111b7e2dc6f7853faccdbf9a45e9d46b5e8dce866626fb73b5e3e6167cab"} Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.687095 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.688027 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.688067 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.688079 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.688888 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"29c84da654b6b287bd96bdd26e4c0ce623a1f76d3f8e043be531ec0fdceec7ef"} Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.688975 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.690069 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.690091 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:08 
crc kubenswrapper[5049]: I0127 16:57:08.690100 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.692692 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d"} Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.692722 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321"} Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.692733 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba"} Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.692742 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da"} Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.695000 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.695399 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.695601 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa"} Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.695645 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0"} Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.695658 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b"} Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.696178 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.696219 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.696229 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.696245 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.696264 5049 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.696273 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.820663 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.822384 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.822437 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.822455 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:08 crc kubenswrapper[5049]: I0127 16:57:08.822495 5049 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 16:57:08 crc kubenswrapper[5049]: E0127 16:57:08.823279 5049 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.20:6443: connect: connection refused" node="crc" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.191653 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.565633 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:18:08.613203465 +0000 UTC Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.702112 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"635645489a9dc5208a2f93206399716e2b6fc97aa376a2cc466e873b0bce0276"} Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.702580 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.709708 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.709785 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.709824 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.710541 5049 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b8a38a88c078a8bdacdbdfe19c21a59c4be8ce40698d389aa81e103d7682b93b" exitCode=0 Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.710760 5049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.710825 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.710892 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b8a38a88c078a8bdacdbdfe19c21a59c4be8ce40698d389aa81e103d7682b93b"} Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.711016 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.710935 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.711114 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.712423 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.712526 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.712608 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.712546 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.712758 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.712773 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.712773 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.712802 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.712816 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.712866 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.712926 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:09 crc kubenswrapper[5049]: I0127 16:57:09.712948 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:10 crc kubenswrapper[5049]: I0127 16:57:10.556524 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:10 crc kubenswrapper[5049]: I0127 16:57:10.566020 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 04:31:08.835329171 +0000 UTC Jan 27 16:57:10 crc kubenswrapper[5049]: I0127 16:57:10.718188 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4a9044edd570a4cd74f54ae040c0d761124fc9a91d4a2472ccf7a560dca844dd"} Jan 27 16:57:10 crc kubenswrapper[5049]: I0127 16:57:10.718271 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"60bc8f0eae510e45278f4b3ed7ac73074979861d314c8eebbf13f98cc5a63f56"} Jan 27 16:57:10 crc kubenswrapper[5049]: I0127 16:57:10.718282 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:10 crc kubenswrapper[5049]: I0127 16:57:10.718292 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d88f46ed39c5a10bdef1ddff18757fc2476df93ece7b1913b60f4b22571f4e99"} Jan 27 16:57:10 crc kubenswrapper[5049]: I0127 16:57:10.718305 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"235941479e8424cc9b1ab7c8d1447f18835a7e8a96369200d9d8d142190be06c"} Jan 27 16:57:10 crc kubenswrapper[5049]: I0127 16:57:10.718344 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:10 crc kubenswrapper[5049]: I0127 16:57:10.719533 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:10 crc kubenswrapper[5049]: I0127 16:57:10.719586 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:10 crc kubenswrapper[5049]: I0127 16:57:10.719608 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:10 crc kubenswrapper[5049]: I0127 16:57:10.719854 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:10 crc kubenswrapper[5049]: I0127 16:57:10.719899 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:10 crc kubenswrapper[5049]: I0127 16:57:10.719914 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.147851 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.386300 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.566350 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:21:15.007575002 +0000 UTC Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.728055 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c22cbeb1f4ce32c35cd0fbde6b0a6c6dfab4b8c814a84eac20ceb59416cf8baf"} Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.728145 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.728225 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.728821 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 
16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.732835 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.732959 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.732979 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.734035 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.734186 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.735344 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.735785 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.735856 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.735878 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:11 crc kubenswrapper[5049]: I0127 16:57:11.921176 5049 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.024337 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.026204 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.026239 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.026251 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.026278 5049 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.566802 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:06:59.579630084 +0000 UTC Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.632886 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.731715 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.731719 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.733409 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.733462 5049 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.733479 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.733989 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.734038 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.734055 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.973439 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.973795 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.975512 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.975560 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:12 crc kubenswrapper[5049]: I0127 16:57:12.975572 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:13 crc kubenswrapper[5049]: I0127 16:57:13.515354 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:13 crc kubenswrapper[5049]: I0127 16:57:13.567794 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:40:32.055719392 +0000 UTC Jan 27 16:57:13 crc kubenswrapper[5049]: I0127 16:57:13.734769 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:13 crc kubenswrapper[5049]: I0127 16:57:13.734785 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:13 crc kubenswrapper[5049]: I0127 16:57:13.736213 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:13 crc kubenswrapper[5049]: I0127 16:57:13.736284 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:13 crc kubenswrapper[5049]: I0127 16:57:13.736325 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:13 crc kubenswrapper[5049]: I0127 16:57:13.736559 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:13 crc kubenswrapper[5049]: I0127 16:57:13.736613 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:13 crc kubenswrapper[5049]: I0127 16:57:13.736625 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:14 crc kubenswrapper[5049]: I0127 16:57:14.568166 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 11:55:55.050642802 +0000 UTC Jan 27 16:57:15 crc kubenswrapper[5049]: I0127 16:57:15.568302 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 07:00:17.354090873 +0000 UTC Jan 27 16:57:15 crc kubenswrapper[5049]: E0127 16:57:15.713963 5049 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 16:57:15 crc kubenswrapper[5049]: I0127 16:57:15.924425 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 16:57:15 crc kubenswrapper[5049]: I0127 16:57:15.924711 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:15 crc kubenswrapper[5049]: I0127 16:57:15.926211 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:15 crc kubenswrapper[5049]: I0127 16:57:15.926255 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:15 crc kubenswrapper[5049]: I0127 16:57:15.926272 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:16 crc kubenswrapper[5049]: I0127 16:57:16.569212 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:07:48.486273281 +0000 UTC Jan 27 16:57:17 crc kubenswrapper[5049]: I0127 16:57:17.040322 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:17 crc kubenswrapper[5049]: I0127 16:57:17.040625 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:17 crc kubenswrapper[5049]: I0127 16:57:17.042416 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:17 crc kubenswrapper[5049]: I0127 16:57:17.042501 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:17 crc kubenswrapper[5049]: I0127 16:57:17.042529 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:17 crc kubenswrapper[5049]: I0127 16:57:17.045827 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:17 crc kubenswrapper[5049]: I0127 16:57:17.570354 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:55:10.535669349 +0000 UTC Jan 27 16:57:17 crc kubenswrapper[5049]: I0127 16:57:17.744169 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:17 crc kubenswrapper[5049]: I0127 16:57:17.745574 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:17 crc kubenswrapper[5049]: I0127 16:57:17.745637 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:17 crc kubenswrapper[5049]: I0127 16:57:17.745651 5049 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:18 crc kubenswrapper[5049]: I0127 16:57:18.571596 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 14:11:48.043657061 +0000 UTC Jan 27 16:57:19 crc kubenswrapper[5049]: W0127 16:57:19.463772 5049 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 16:57:19 crc kubenswrapper[5049]: I0127 16:57:19.463879 5049 trace.go:236] Trace[1661524819]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 16:57:09.462) (total time: 10001ms): Jan 27 16:57:19 crc kubenswrapper[5049]: Trace[1661524819]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:57:19.463) Jan 27 16:57:19 crc kubenswrapper[5049]: Trace[1661524819]: [10.001820861s] [10.001820861s] END Jan 27 16:57:19 crc kubenswrapper[5049]: E0127 16:57:19.463914 5049 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 16:57:19 crc kubenswrapper[5049]: I0127 16:57:19.561431 5049 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 27 16:57:19 crc kubenswrapper[5049]: I0127 16:57:19.598292 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:33:13.938946378 +0000 UTC Jan 27 16:57:19 crc kubenswrapper[5049]: W0127 16:57:19.825425 5049 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 16:57:19 crc kubenswrapper[5049]: I0127 16:57:19.825548 5049 trace.go:236] Trace[1912203586]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 16:57:09.823) (total time: 10001ms): Jan 27 16:57:19 crc kubenswrapper[5049]: Trace[1912203586]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:57:19.825) Jan 27 16:57:19 crc kubenswrapper[5049]: Trace[1912203586]: [10.001600134s] [10.001600134s] END Jan 27 16:57:19 crc kubenswrapper[5049]: E0127 16:57:19.825595 5049 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.040959 5049 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.041054 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 16:57:20 crc kubenswrapper[5049]: W0127 16:57:20.255569 5049 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.255737 5049 trace.go:236] Trace[1936869171]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 16:57:10.253) (total time: 10002ms): Jan 27 16:57:20 crc kubenswrapper[5049]: Trace[1936869171]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (16:57:20.255) Jan 27 16:57:20 crc kubenswrapper[5049]: Trace[1936869171]: [10.002229569s] [10.002229569s] END Jan 27 16:57:20 crc kubenswrapper[5049]: E0127 16:57:20.255768 5049 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.420263 5049 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.420346 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.425054 5049 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.425137 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.464797 5049 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.464879 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.557010 5049 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.557101 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.598927 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:54:49.816364323 +0000 UTC Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.753251 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.755089 5049 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="635645489a9dc5208a2f93206399716e2b6fc97aa376a2cc466e873b0bce0276" exitCode=255 Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.755135 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"635645489a9dc5208a2f93206399716e2b6fc97aa376a2cc466e873b0bce0276"} Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.755271 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.756086 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.756120 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.756133 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:20 crc kubenswrapper[5049]: I0127 16:57:20.756796 5049 scope.go:117] "RemoveContainer" containerID="635645489a9dc5208a2f93206399716e2b6fc97aa376a2cc466e873b0bce0276" Jan 27 16:57:21 crc kubenswrapper[5049]: I0127 16:57:21.599427 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-11-27 18:41:35.050304043 +0000 UTC Jan 27 16:57:21 crc kubenswrapper[5049]: I0127 16:57:21.760845 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 16:57:21 crc kubenswrapper[5049]: I0127 16:57:21.764050 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071"} Jan 27 16:57:21 crc kubenswrapper[5049]: I0127 16:57:21.764314 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:21 crc kubenswrapper[5049]: I0127 16:57:21.765885 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:21 crc kubenswrapper[5049]: I0127 16:57:21.766076 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:21 crc kubenswrapper[5049]: I0127 16:57:21.766236 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:22 crc kubenswrapper[5049]: I0127 16:57:22.599869 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:41:24.597332022 +0000 UTC Jan 27 16:57:22 crc kubenswrapper[5049]: I0127 16:57:22.655895 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 16:57:22 crc kubenswrapper[5049]: I0127 16:57:22.656182 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:22 crc kubenswrapper[5049]: I0127 16:57:22.657392 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:22 crc kubenswrapper[5049]: I0127 16:57:22.657430 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:22 crc kubenswrapper[5049]: I0127 16:57:22.657467 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:22 crc kubenswrapper[5049]: I0127 16:57:22.670150 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 16:57:22 crc kubenswrapper[5049]: I0127 16:57:22.767546 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:22 crc kubenswrapper[5049]: I0127 16:57:22.769119 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:22 crc kubenswrapper[5049]: I0127 16:57:22.769163 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:22 crc kubenswrapper[5049]: I0127 16:57:22.769181 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:23 crc kubenswrapper[5049]: I0127 16:57:23.521787 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:23 crc kubenswrapper[5049]: I0127 16:57:23.521993 5049 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Jan 27 16:57:23 crc kubenswrapper[5049]: I0127 16:57:23.522368 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:23 crc kubenswrapper[5049]: I0127 16:57:23.523338 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:23 crc kubenswrapper[5049]: I0127 16:57:23.523438 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:23 crc kubenswrapper[5049]: I0127 16:57:23.523513 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:23 crc kubenswrapper[5049]: I0127 16:57:23.527268 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:23 crc kubenswrapper[5049]: I0127 16:57:23.601382 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:25:52.626717471 +0000 UTC Jan 27 16:57:23 crc kubenswrapper[5049]: I0127 16:57:23.769768 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:23 crc kubenswrapper[5049]: I0127 16:57:23.771371 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:23 crc kubenswrapper[5049]: I0127 16:57:23.771417 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:23 crc kubenswrapper[5049]: I0127 16:57:23.771432 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:24 crc kubenswrapper[5049]: I0127 16:57:24.414234 5049 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 16:57:24 crc kubenswrapper[5049]: I0127 16:57:24.602303 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:07:15.070057559 +0000 UTC Jan 27 16:57:24 crc kubenswrapper[5049]: I0127 16:57:24.685772 5049 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 16:57:24 crc kubenswrapper[5049]: I0127 16:57:24.771649 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:24 crc kubenswrapper[5049]: I0127 16:57:24.772502 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:24 crc kubenswrapper[5049]: I0127 16:57:24.772560 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:24 crc kubenswrapper[5049]: I0127 16:57:24.772594 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:24 crc kubenswrapper[5049]: I0127 16:57:24.880986 5049 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.400359 5049 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline 
exceeded" interval="6.4s" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.406643 5049 trace.go:236] Trace[1860731308]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 16:57:14.590) (total time: 10816ms): Jan 27 16:57:25 crc kubenswrapper[5049]: Trace[1860731308]: ---"Objects listed" error: 10816ms (16:57:25.406) Jan 27 16:57:25 crc kubenswrapper[5049]: Trace[1860731308]: [10.816180264s] [10.816180264s] END Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.406698 5049 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.406654 5049 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.412278 5049 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.431299 5049 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.556507 5049 apiserver.go:52] "Watching apiserver" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.560907 5049 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.561343 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.561886 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.562009 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.562145 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.562453 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.562633 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.562654 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.562879 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.562989 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.562935 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.567099 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.567408 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.567776 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.567809 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.566158 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.568389 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.569029 5049 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.571452 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.571713 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.571940 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.603195 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:56:40.344834068 +0000 UTC Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.605872 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613635 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613704 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613730 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613749 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 
16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613766 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613786 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613803 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613817 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613836 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613852 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613871 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613890 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613905 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613920 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 16:57:25 crc 
kubenswrapper[5049]: I0127 16:57:25.613936 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613952 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.613968 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614082 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614100 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.614304 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:57:26.114277514 +0000 UTC m=+21.213251063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614399 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614419 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614499 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614542 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614561 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614578 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614597 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614549 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614638 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614774 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614861 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614907 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614948 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614991 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615065 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615126 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615177 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615433 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615517 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615579 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615646 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615739 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615799 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615838 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615882 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615919 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615955 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615993 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616035 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616089 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616127 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616164 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616281 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616318 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616353 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616395 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616431 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616466 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616505 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616554 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616588 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616626 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616663 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616730 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616767 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616828 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616989 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617055 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617119 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617165 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617222 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617295 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617362 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617416 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617471 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617536 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617593 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617653 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617738 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617790 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617849 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617906 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617983 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618049 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618101 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618161 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618220 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618285 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" 
(UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618344 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618403 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618455 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618508 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618564 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618619 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614727 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618705 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618764 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618839 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618882 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618936 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618946 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618903 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615391 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615837 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.614907 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616087 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616116 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616376 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616580 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616871 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.616997 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617051 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617197 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617480 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617616 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617617 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617719 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.617972 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618007 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618140 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618596 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.615864 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.619509 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.619536 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.618897 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.619794 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.619817 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.619838 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.619863 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.619886 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.619906 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.619962 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.619982 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620001 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620020 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620038 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620069 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620089 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620109 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620127 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620149 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620166 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620162 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620185 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620218 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620241 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620396 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620427 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620454 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620480 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620583 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620605 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620630 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620649 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620684 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620706 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620725 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620744 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620764 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620785 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620803 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620820 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620841 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620860 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620883 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620902 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620920 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620939 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620962 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620980 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620998 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621017 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621037 5049 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621177 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621206 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621229 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621247 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621265 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621288 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621306 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621327 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621345 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 16:57:25 crc 
kubenswrapper[5049]: I0127 16:57:25.621361 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621380 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621398 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621419 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621439 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621455 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621474 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621490 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621505 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621520 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621537 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621558 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621577 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621596 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621613 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621635 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621654 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621689 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621707 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621732 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") 
pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621755 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621774 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621797 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621816 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621836 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621854 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621872 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621893 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621917 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621937 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621957 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621976 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621950 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621952 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.621995 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622218 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622264 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622286 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622346 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622352 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622304 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622404 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622440 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622462 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622483 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622503 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622529 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622535 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622552 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622574 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622628 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622655 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622693 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622714 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622717 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622737 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622843 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622882 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622926 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622959 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.622987 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.623025 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.623107 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 
16:57:25.623135 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.623157 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.623185 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.623331 5049 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.623381 5049 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.623421 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.623441 5049 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.623570 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.623938 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:26.12391422 +0000 UTC m=+21.222887769 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.624340 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.624495 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.624839 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.624927 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.624996 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.625209 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.625279 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.625437 5049 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.625666 5049 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.625755 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:26.125734643 +0000 UTC m=+21.224708192 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.626131 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.626183 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.626339 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.627493 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.628637 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.628976 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.629042 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.629106 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.629071 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.629637 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.629656 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.630470 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.620126 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.630702 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.623446 5049 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.630915 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.630940 5049 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631027 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631052 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631068 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631082 5049 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631095 5049 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631107 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631123 5049 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631135 5049 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631146 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631160 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631172 5049 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631183 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631177 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631199 5049 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631212 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631223 5049 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631235 5049 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631247 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631257 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631267 5049 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631278 5049 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631289 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631303 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631321 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631335 5049 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631346 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631356 5049 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631367 5049 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631385 5049 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631400 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631420 5049 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631436 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631451 5049 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631715 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631730 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.631967 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.632209 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.632477 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.633322 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.635356 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.635442 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.635923 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.635946 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.635953 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.636142 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.636228 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.636542 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.636852 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.637090 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.637498 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.637526 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.638081 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.638134 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.638196 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.638525 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.638560 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.639038 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.639641 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.639747 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.639888 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.640016 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.640165 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.640186 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.640200 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.640764 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.640780 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.641000 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.640857 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.641377 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.641560 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.641570 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.641631 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.641634 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.641690 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.641720 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.641703 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.641963 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.642062 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.642133 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.642465 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.642535 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.642742 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.642961 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.643172 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.643079 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.643513 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.643533 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.643716 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.644110 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.644224 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.644427 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.644817 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.651770 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.651904 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.652710 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.653151 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.653299 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.653369 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.653239 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.653470 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.653564 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.653941 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.653968 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.653910 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.653986 5049 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.654067 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:26.154044458 +0000 UTC m=+21.253017997 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.654193 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.654222 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.654237 5049 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:25 crc kubenswrapper[5049]: E0127 16:57:25.654308 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:26.154281384 +0000 UTC m=+21.253254943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.654335 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.654688 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.654828 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.654979 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.655090 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.655321 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.655565 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.655599 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.655737 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.656516 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.656624 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.657042 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.657072 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.657277 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.657382 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.657403 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.657754 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.657785 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.658165 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.658524 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.658832 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.658923 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.659477 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.659507 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.659626 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.659745 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.659808 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.659987 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.660006 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.661367 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.663222 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.663346 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.663580 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.663601 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.658690 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.665558 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.663789 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.663896 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.665977 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.663975 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.664603 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.666936 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.664931 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.654849 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.665778 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.666294 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.666205 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.666377 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.667985 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.662029 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.666213 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.668304 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.668367 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.668484 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.668536 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.668836 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.668950 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.669186 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.669912 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.670141 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.671015 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.671151 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.671806 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.672070 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.672557 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.675290 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.676268 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.677278 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.677307 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.679242 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.680960 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.681261 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.681589 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.681475 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.684272 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.685345 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.689210 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.693637 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.698884 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.702221 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.704241 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.713505 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.714253 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.717494 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.717522 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.718350 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.719502 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.720084 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.723196 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.727140 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.728233 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.728891 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.731050 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.731282 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732035 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732073 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732165 5049 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732185 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732196 5049 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732206 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732215 5049 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732225 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732235 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732245 5049 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732254 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732263 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732273 5049 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732284 5049 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732293 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732303 5049 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732313 5049 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732321 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734709 5049 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734729 5049 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734778 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734790 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734800 5049 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.732689 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734810 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734901 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734911 5049 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734923 5049 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734942 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734952 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734963 5049 reconciler_common.go:293] "Volume detached for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734973 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734983 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734993 5049 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735004 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735015 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735025 5049 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735034 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735044 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735056 5049 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735065 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735111 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735125 5049 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735145 5049 reconciler_common.go:293] "Volume detached for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735160 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735174 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735187 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735202 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735217 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735228 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735237 5049 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735246 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735255 5049 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735268 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735277 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735286 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735297 5049 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735307 5049 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735316 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735326 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735336 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735345 5049 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735354 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735364 5049 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735374 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735384 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735394 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735403 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735413 5049 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735422 5049 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735432 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735442 5049 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735451 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735462 5049 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735472 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735483 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735518 5049 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735529 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735538 5049 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735547 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735557 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735569 5049 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735579 5049 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735589 5049 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735599 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735608 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735636 5049 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735646 5049 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735657 5049 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735684 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735696 5049 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735705 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735715 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735725 5049 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735735 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735744 5049 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735753 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735763 5049 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735772 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735781 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735790 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735802 5049 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735812 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735821 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735831 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735840 5049 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735849 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735858 5049 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735868 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735876 5049 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735885 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735894 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735903 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735913 5049 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735922 5049 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735932 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735941 5049 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735953 5049 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735963 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735972 5049 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735981 5049 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735990 5049 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 
16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.735998 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736007 5049 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736016 5049 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736024 5049 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736034 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736043 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736053 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736061 5049 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736078 5049 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736087 5049 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736095 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736105 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736114 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" 
DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736126 5049 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736135 5049 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736144 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736153 5049 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736164 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736174 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736183 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736194 5049 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736205 5049 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736213 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736222 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736231 5049 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736240 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736249 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736262 5049 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736276 5049 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736289 5049 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736308 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.733434 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.734202 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736381 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736412 5049 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736434 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736447 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736459 5049 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736469 5049 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.736591 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.737024 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.738251 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.738359 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.739802 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.740699 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.741802 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.743481 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.744768 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.745504 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.746448 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.746853 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.747367 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.747884 5049 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.747992 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.750090 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.750579 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: 
I0127 16:57:25.751028 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.752969 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.753583 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.754131 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.755193 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.756209 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.756690 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.757175 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.757784 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.758812 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.759585 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.760510 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.761098 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.761964 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.762708 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.763556 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.764034 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.765031 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 16:57:25 crc 
kubenswrapper[5049]: I0127 16:57:25.765716 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.766289 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.766287 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.767166 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.774586 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.783062 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.793245 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.803184 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.837211 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.837301 5049 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.886926 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.899813 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 16:57:25 crc kubenswrapper[5049]: W0127 16:57:25.902361 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-93b3eca146f54d7c3545c91acc75fcdd0e4834608f26624352ff961a5fb0bac4 WatchSource:0}: Error finding container 93b3eca146f54d7c3545c91acc75fcdd0e4834608f26624352ff961a5fb0bac4: Status 404 returned error can't find the container with id 93b3eca146f54d7c3545c91acc75fcdd0e4834608f26624352ff961a5fb0bac4 Jan 27 16:57:25 crc kubenswrapper[5049]: I0127 16:57:25.911432 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.140146 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:57:26 crc kubenswrapper[5049]: E0127 16:57:26.140169 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:57:27.140144191 +0000 UTC m=+22.239117740 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.140330 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.140374 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:26 crc kubenswrapper[5049]: E0127 16:57:26.140476 5049 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 16:57:26 crc kubenswrapper[5049]: E0127 16:57:26.140510 5049 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 16:57:26 crc kubenswrapper[5049]: E0127 16:57:26.140525 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:27.14051793 +0000 UTC m=+22.239491479 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 16:57:26 crc kubenswrapper[5049]: E0127 16:57:26.140638 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:27.140613852 +0000 UTC m=+22.239587401 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.241687 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.241729 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:26 crc kubenswrapper[5049]: E0127 16:57:26.241854 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 16:57:26 crc kubenswrapper[5049]: E0127 16:57:26.241869 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 16:57:26 crc kubenswrapper[5049]: E0127 16:57:26.241880 5049 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:26 crc kubenswrapper[5049]: E0127 16:57:26.241931 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:27.241912232 +0000 UTC m=+22.340885781 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:26 crc kubenswrapper[5049]: E0127 16:57:26.242372 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 16:57:26 crc kubenswrapper[5049]: E0127 16:57:26.242393 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 16:57:26 crc kubenswrapper[5049]: E0127 16:57:26.242403 5049 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:26 crc kubenswrapper[5049]: E0127 16:57:26.242459 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:27.242451085 +0000 UTC m=+22.341424634 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.603368 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:39:41.301770272 +0000 UTC Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.645154 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:26 crc kubenswrapper[5049]: E0127 16:57:26.645339 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.711311 5049 csr.go:261] certificate signing request csr-tbvb4 is approved, waiting to be issued Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.721120 5049 csr.go:257] certificate signing request csr-tbvb4 is issued Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.779045 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4eeb03223a6acfb5e2501d77c337869a55f73a70c44781b1840641583d946f6b"} Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.782821 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce"} Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.782908 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2"} Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.782926 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"402fd3154a9252e18894fa5d94e06b8482e80576411ce786ab94745412c5caec"} Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.785393 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518"} Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.785439 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"93b3eca146f54d7c3545c91acc75fcdd0e4834608f26624352ff961a5fb0bac4"} Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.802187 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.816595 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:17
4f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.838924 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.852632 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.864489 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.887171 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.905231 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.923398 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.943510 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.970298 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:26 crc kubenswrapper[5049]: I0127 16:57:26.989596 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.003616 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.053249 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.065863 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.076273 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.092813 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.112732 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.125700 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.142057 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.148185 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.148293 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.148326 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.148452 5049 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.148453 5049 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.148516 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:29.148500056 +0000 UTC m=+24.247473605 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.148540 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:29.148531497 +0000 UTC m=+24.247505046 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.148588 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:57:29.148547537 +0000 UTC m=+24.247521086 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.148646 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
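
Note the retry bookkeeping in the mount errors above: each failure schedules the next attempt ("No retries permitted until ...") with a durationBeforeRetry of 2s, tracked per volume in nestedpendingoperations, and the kubelet backs off exponentially on repeated failures. The sketch below illustrates the doubling-with-cap pattern only; the initial delay and cap are assumed values, not taken from kubelet source.

    package main

    import (
        "fmt"
        "time"
    )

    // backoff is a simplified stand-in for the kubelet's per-operation
    // exponential backoff: each consecutive failure doubles the wait,
    // up to a cap. Initial delay and cap here are assumptions.
    type backoff struct {
        delay, max time.Duration
    }

    func (b *backoff) next() time.Duration {
        if b.delay == 0 {
            b.delay = 500 * time.Millisecond
            return b.delay
        }
        b.delay *= 2
        if b.delay > b.max {
            b.delay = b.max
        }
        return b.delay
    }

    func main() {
        b := backoff{max: 2 * time.Minute}
        for attempt := 1; attempt <= 5; attempt++ {
            // A real caller would redo MountVolume.SetUp here and only
            // consult next() after a failure like "object not registered".
            fmt.Printf("attempt %d failed: no retries permitted for %v\n", attempt, b.next())
        }
    }
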
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.160096 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.169799 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.180398 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.196612 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.215378 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
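
From this point in the log, the webhook failures change character: the dial now succeeds, but TLS verification rejects the serving certificate as expired (NotAfter 2025-08-24T17:21:41Z against a node clock of 2026-01-27T16:57:27Z). This is consistent with a CRC image whose baked-in certificates have aged out and can only be rotated once the control plane is up. The Go sketch below mirrors the validity check the TLS stack performs; the PEM path is hypothetical, modeled on the /etc/webhook-cert/ mount visible further down in this log.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path, modeled on the webhook-cert volume mount.
        data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block in file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        now := time.Now()
        switch {
        case now.After(cert.NotAfter):
            // Mirrors the log wording: "current time ... is after ..."
            fmt.Printf("certificate has expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        case now.Before(cert.NotBefore):
            fmt.Println("certificate is not yet valid")
        default:
            fmt.Println("certificate valid until", cert.NotAfter)
        }
    }
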
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.232569 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.249527 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.249595 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.249750 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.249768 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.249781 5049 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.249831 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:29.249813897 +0000 UTC m=+24.348787446 (durationBeforeRetry 2s). 
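
The projected-volume errors here are a different and more benign class of failure: "object ... not registered" means the kubelet's watch-based secret/configmap machinery has not yet registered and synced those objects for the pod, so the mount is simply retried; the "Caches populated for *v1.ConfigMap/Secret" entries that follow for other namespaces show the same machinery catching up. The sketch below shows the general client-go pattern of waiting for an informer cache to sync before reading from it; the kubeconfig path and the lookup target are assumptions for illustration.

    package main

    import (
        "fmt"
        "log"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; in-cluster code would use rest.InClusterConfig().
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
        cmInformer := factory.Core().V1().ConfigMaps()
        informer := cmInformer.Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)

        // Until the cache syncs, lookups fail much like the kubelet's
        // "not registered" errors above; mounts must wait for this point.
        if !cache.WaitForCacheSync(stop, informer.HasSynced) {
            log.Fatal("informer cache never synced")
        }

        cm, err := cmInformer.Lister().ConfigMaps("openshift-network-diagnostics").Get("kube-root-ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("resolved", cm.Name, "with", len(cm.Data), "keys")
    }
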
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.250175 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.250188 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.250198 5049 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.250222 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:29.250213986 +0000 UTC m=+24.349187535 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.251742 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-l8gpm"] Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.252021 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-l8gpm" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.253301 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.253752 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.256627 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.262156 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.280402 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.302110 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.336803 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.350185 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnlbh\" (UniqueName: \"kubernetes.io/projected/6bf0a52b-305e-49f5-b397-c66ec99f3d8c-kube-api-access-qnlbh\") pod \"node-resolver-l8gpm\" (UID: \"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\") " pod="openshift-dns/node-resolver-l8gpm" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.350238 5049 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6bf0a52b-305e-49f5-b397-c66ec99f3d8c-hosts-file\") pod \"node-resolver-l8gpm\" (UID: \"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\") " pod="openshift-dns/node-resolver-l8gpm" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.355749 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.369331 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.384352 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.408737 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z"
Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.435139 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z"
Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.451277 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6bf0a52b-305e-49f5-b397-c66ec99f3d8c-hosts-file\") pod \"node-resolver-l8gpm\" (UID: \"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\") " pod="openshift-dns/node-resolver-l8gpm"
Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.451357 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnlbh\" (UniqueName: \"kubernetes.io/projected/6bf0a52b-305e-49f5-b397-c66ec99f3d8c-kube-api-access-qnlbh\") pod \"node-resolver-l8gpm\" (UID: \"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\") " pod="openshift-dns/node-resolver-l8gpm"
Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.451493 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6bf0a52b-305e-49f5-b397-c66ec99f3d8c-hosts-file\") pod \"node-resolver-l8gpm\" (UID: 
\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\") " pod="openshift-dns/node-resolver-l8gpm" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.458950 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.559269 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnlbh\" (UniqueName: \"kubernetes.io/projected/6bf0a52b-305e-49f5-b397-c66ec99f3d8c-kube-api-access-qnlbh\") pod \"node-resolver-l8gpm\" (UID: \"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\") " pod="openshift-dns/node-resolver-l8gpm" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.565710 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-l8gpm" Jan 27 16:57:27 crc kubenswrapper[5049]: W0127 16:57:27.581949 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bf0a52b_305e_49f5_b397_c66ec99f3d8c.slice/crio-2017ec288eea5683e7e6f9ed4f67741591d024a8a46b2211289393a77bb5a58e WatchSource:0}: Error finding container 2017ec288eea5683e7e6f9ed4f67741591d024a8a46b2211289393a77bb5a58e: Status 404 returned error can't find the container with id 2017ec288eea5683e7e6f9ed4f67741591d024a8a46b2211289393a77bb5a58e Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.605311 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 20:21:31.366797743 +0000 UTC Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.645697 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.645789 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.645915 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.646092 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.651728 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-2zsnk"] Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.652473 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hc4th"] Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.653130 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.655878 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-2d7n9"] Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.656114 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.656785 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.656904 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.657158 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.657226 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.657296 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.657371 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.658184 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.658911 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.661475 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.661751 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.678486 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.678582 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.678792 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.701556 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.722720 5049 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 16:52:26 +0000 UTC, rotation deadline is 2026-12-13 20:14:32.618641502 +0000 UTC Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.722788 5049 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7683h17m4.895855601s for next certificate rotation Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.730762 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.754259 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/63d094db-b027-49de-8ac0-427f5cd179e6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 
16:57:27.754321 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-run-k8s-cni-cncf-io\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.754347 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-etc-kubernetes\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.754374 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-multus-daemon-config\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.754399 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/63d094db-b027-49de-8ac0-427f5cd179e6-cnibin\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.754454 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl9s6\" (UniqueName: \"kubernetes.io/projected/63d094db-b027-49de-8ac0-427f5cd179e6-kube-api-access-dl9s6\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.754514 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/63d094db-b027-49de-8ac0-427f5cd179e6-system-cni-dir\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.754535 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-var-lib-cni-bin\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.754554 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/63d094db-b027-49de-8ac0-427f5cd179e6-cni-binary-copy\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.754575 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-run-multus-certs\") pod 
\"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.754734 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/63d094db-b027-49de-8ac0-427f5cd179e6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.754830 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvr84\" (UniqueName: \"kubernetes.io/projected/b714597d-68b8-4f8f-9d55-9f1cea23324a-kube-api-access-mvr84\") pod \"machine-config-daemon-2d7n9\" (UID: \"b714597d-68b8-4f8f-9d55-9f1cea23324a\") " pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.754882 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-multus-cni-dir\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.754925 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-os-release\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.754967 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-var-lib-kubelet\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.755009 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbbm7\" (UniqueName: \"kubernetes.io/projected/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-kube-api-access-rbbm7\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.755063 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-cnibin\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.755097 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-hostroot\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.755122 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-multus-conf-dir\") 
pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.755182 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-multus-socket-dir-parent\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.755226 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-var-lib-cni-multus\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.755254 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b714597d-68b8-4f8f-9d55-9f1cea23324a-rootfs\") pod \"machine-config-daemon-2d7n9\" (UID: \"b714597d-68b8-4f8f-9d55-9f1cea23324a\") " pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.755282 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-system-cni-dir\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.755331 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-run-netns\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.755359 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/63d094db-b027-49de-8ac0-427f5cd179e6-os-release\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.755385 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-cni-binary-copy\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.755445 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b714597d-68b8-4f8f-9d55-9f1cea23324a-proxy-tls\") pod \"machine-config-daemon-2d7n9\" (UID: \"b714597d-68b8-4f8f-9d55-9f1cea23324a\") " pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.755476 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/b714597d-68b8-4f8f-9d55-9f1cea23324a-mcd-auth-proxy-config\") pod \"machine-config-daemon-2d7n9\" (UID: \"b714597d-68b8-4f8f-9d55-9f1cea23324a\") " pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.755668 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.774611 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.793192 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l8gpm" event={"ID":"6bf0a52b-305e-49f5-b397-c66ec99f3d8c","Type":"ContainerStarted","Data":"056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1"} Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.793416 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l8gpm" event={"ID":"6bf0a52b-305e-49f5-b397-c66ec99f3d8c","Type":"ContainerStarted","Data":"2017ec288eea5683e7e6f9ed4f67741591d024a8a46b2211289393a77bb5a58e"} Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.793438 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.795994 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.796548 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.798496 5049 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071" exitCode=255 Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.799016 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071"} Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.799076 5049 scope.go:117] "RemoveContainer" containerID="635645489a9dc5208a2f93206399716e2b6fc97aa376a2cc466e873b0bce0276" Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.811830 5049 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.812022 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.814841 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.815288 5049 scope.go:117] "RemoveContainer" containerID="db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071" Jan 27 16:57:27 crc kubenswrapper[5049]: E0127 16:57:27.815501 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.829156 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.843158 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.856778 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-run-multus-certs\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.856835 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/63d094db-b027-49de-8ac0-427f5cd179e6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.856864 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvr84\" (UniqueName: \"kubernetes.io/projected/b714597d-68b8-4f8f-9d55-9f1cea23324a-kube-api-access-mvr84\") pod \"machine-config-daemon-2d7n9\" (UID: \"b714597d-68b8-4f8f-9d55-9f1cea23324a\") " pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.856888 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-multus-cni-dir\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.856911 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-os-release\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.856945 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbbm7\" (UniqueName: \"kubernetes.io/projected/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-kube-api-access-rbbm7\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.856973 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-var-lib-kubelet\") pod \"multus-hc4th\" 
(UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.856995 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-cnibin\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857014 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-hostroot\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857039 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-multus-conf-dir\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857062 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-system-cni-dir\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857084 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-multus-socket-dir-parent\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857107 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-var-lib-cni-multus\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857131 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b714597d-68b8-4f8f-9d55-9f1cea23324a-rootfs\") pod \"machine-config-daemon-2d7n9\" (UID: \"b714597d-68b8-4f8f-9d55-9f1cea23324a\") " pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857166 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-run-netns\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857190 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/63d094db-b027-49de-8ac0-427f5cd179e6-os-release\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857220 5049 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-cni-binary-copy\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857252 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b714597d-68b8-4f8f-9d55-9f1cea23324a-proxy-tls\") pod \"machine-config-daemon-2d7n9\" (UID: \"b714597d-68b8-4f8f-9d55-9f1cea23324a\") " pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857276 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b714597d-68b8-4f8f-9d55-9f1cea23324a-mcd-auth-proxy-config\") pod \"machine-config-daemon-2d7n9\" (UID: \"b714597d-68b8-4f8f-9d55-9f1cea23324a\") " pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857305 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-run-k8s-cni-cncf-io\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857330 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/63d094db-b027-49de-8ac0-427f5cd179e6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857352 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-etc-kubernetes\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857377 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-multus-daemon-config\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857399 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/63d094db-b027-49de-8ac0-427f5cd179e6-cnibin\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857433 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/63d094db-b027-49de-8ac0-427f5cd179e6-system-cni-dir\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857461 5049 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dl9s6\" (UniqueName: \"kubernetes.io/projected/63d094db-b027-49de-8ac0-427f5cd179e6-kube-api-access-dl9s6\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857482 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-var-lib-cni-bin\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.857505 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/63d094db-b027-49de-8ac0-427f5cd179e6-cni-binary-copy\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.858329 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/63d094db-b027-49de-8ac0-427f5cd179e6-cni-binary-copy\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.858402 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-run-multus-certs\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.858840 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/63d094db-b027-49de-8ac0-427f5cd179e6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.859318 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-multus-cni-dir\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.859552 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-os-release\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.859744 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-var-lib-kubelet\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.859792 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-cnibin\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.859821 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-hostroot\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.859852 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-multus-conf-dir\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.859881 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-run-k8s-cni-cncf-io\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.859913 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/63d094db-b027-49de-8ac0-427f5cd179e6-system-cni-dir\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.860003 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/63d094db-b027-49de-8ac0-427f5cd179e6-cnibin\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.860101 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-var-lib-cni-bin\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.860143 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b714597d-68b8-4f8f-9d55-9f1cea23324a-rootfs\") pod \"machine-config-daemon-2d7n9\" (UID: \"b714597d-68b8-4f8f-9d55-9f1cea23324a\") " pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.860144 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.860210 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-multus-socket-dir-parent\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.860239 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-var-lib-cni-multus\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.860280 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/63d094db-b027-49de-8ac0-427f5cd179e6-os-release\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.860304 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-host-run-netns\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.860183 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-system-cni-dir\") pod 
\"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.860770 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/63d094db-b027-49de-8ac0-427f5cd179e6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.860823 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-etc-kubernetes\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.860886 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b714597d-68b8-4f8f-9d55-9f1cea23324a-mcd-auth-proxy-config\") pod \"machine-config-daemon-2d7n9\" (UID: \"b714597d-68b8-4f8f-9d55-9f1cea23324a\") " pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.860997 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-cni-binary-copy\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.861580 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-multus-daemon-config\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.870279 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b714597d-68b8-4f8f-9d55-9f1cea23324a-proxy-tls\") pod \"machine-config-daemon-2d7n9\" (UID: \"b714597d-68b8-4f8f-9d55-9f1cea23324a\") " pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.877633 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbbm7\" (UniqueName: \"kubernetes.io/projected/7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b-kube-api-access-rbbm7\") pod \"multus-hc4th\" (UID: \"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\") " pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.879359 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl9s6\" (UniqueName: \"kubernetes.io/projected/63d094db-b027-49de-8ac0-427f5cd179e6-kube-api-access-dl9s6\") pod \"multus-additional-cni-plugins-2zsnk\" (UID: \"63d094db-b027-49de-8ac0-427f5cd179e6\") " pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.885480 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.894341 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvr84\" (UniqueName: \"kubernetes.io/projected/b714597d-68b8-4f8f-9d55-9f1cea23324a-kube-api-access-mvr84\") pod \"machine-config-daemon-2d7n9\" (UID: \"b714597d-68b8-4f8f-9d55-9f1cea23324a\") " pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.898378 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.916792 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.930293 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.943472 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://635645489a9dc5208a2f93206399716e2b6fc97aa376a2cc466e873b0bce0276\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:19Z\\\",\\\"message\\\":\\\"W0127 16:57:08.837223 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 16:57:08.837604 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769533028 cert, and key in /tmp/serving-cert-657694708/serving-signer.crt, /tmp/serving-cert-657694708/serving-signer.key\\\\nI0127 16:57:09.229948 1 observer_polling.go:159] Starting file observer\\\\nW0127 16:57:09.233461 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 16:57:09.233758 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:09.235796 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-657694708/tls.crt::/tmp/serving-cert-657694708/tls.key\\\\\\\"\\\\nF0127 16:57:19.734090 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.965488 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.979430 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.986043 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:27Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.989408 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hc4th" Jan 27 16:57:27 crc kubenswrapper[5049]: W0127 16:57:27.991941 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63d094db_b027_49de_8ac0_427f5cd179e6.slice/crio-7e5d9423780b696363f86da15316368ff5bb720fd2fb91cf4f9fe310f602d641 WatchSource:0}: Error finding container 7e5d9423780b696363f86da15316368ff5bb720fd2fb91cf4f9fe310f602d641: Status 404 returned error can't find the container with id 7e5d9423780b696363f86da15316368ff5bb720fd2fb91cf4f9fe310f602d641 Jan 27 16:57:27 crc kubenswrapper[5049]: I0127 16:57:27.995816 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.005543 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 
2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.024861 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.038492 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.058620 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.069422 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zmzbf"] Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.070291 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.075652 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.075695 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.075746 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.075822 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.076053 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.076425 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.076733 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.078190 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.106755 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.137508 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.161952 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-systemd\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162028 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-log-socket\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162049 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162069 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovn-node-metrics-cert\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162087 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-run-netns\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162104 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pflv\" (UniqueName: \"kubernetes.io/projected/b0ca704c-b740-43c4-845f-7de5bfa5a29c-kube-api-access-6pflv\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162131 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-etc-openvswitch\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162154 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-var-lib-openvswitch\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162170 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-openvswitch\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162186 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-node-log\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162206 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovnkube-config\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162222 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-env-overrides\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162239 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-kubelet\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162253 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-ovn\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162274 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162292 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovnkube-script-lib\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162315 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-systemd-units\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162334 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-slash\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162350 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-cni-bin\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.162367 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-cni-netd\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.166554 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.196877 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.245666 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635645489a9dc5208a2f93206399716e2b6fc97aa376a2cc466e873b0bce0276\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:19Z\\\",\\\"message\\\":\\\"W0127 16:57:08.837223 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
16:57:08.837604 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769533028 cert, and key in /tmp/serving-cert-657694708/serving-signer.crt, /tmp/serving-cert-657694708/serving-signer.key\\\\nI0127 16:57:09.229948 1 observer_polling.go:159] Starting file observer\\\\nW0127 16:57:09.233461 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 16:57:09.233758 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:09.235796 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-657694708/tls.crt::/tmp/serving-cert-657694708/tls.key\\\\\\\"\\\\nF0127 16:57:19.734090 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 
16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.264592 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-run-netns\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.264684 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pflv\" (UniqueName: \"kubernetes.io/projected/b0ca704c-b740-43c4-845f-7de5bfa5a29c-kube-api-access-6pflv\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 
16:57:28.264723 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-etc-openvswitch\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.264757 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-var-lib-openvswitch\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.264818 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-openvswitch\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.264845 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-node-log\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.264875 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovnkube-config\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.264904 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-env-overrides\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.264955 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-kubelet\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.264978 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-ovn\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.265005 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.265033 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovnkube-script-lib\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.265064 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-slash\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.265092 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-systemd-units\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.265112 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-cni-bin\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.265150 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-cni-netd\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.265175 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-systemd\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.265225 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-log-socket\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.265252 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.265282 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovn-node-metrics-cert\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.265858 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-ovn\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.265972 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-run-netns\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.266378 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-etc-openvswitch\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.266424 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-var-lib-openvswitch\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.266456 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-openvswitch\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.266484 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-node-log\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.267217 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovnkube-config\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.267589 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-env-overrides\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.267611 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-kubelet\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.267705 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-cni-bin\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.267759 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-systemd\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.267819 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-cni-netd\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.267857 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-slash\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.267862 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-systemd-units\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.267899 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.267909 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-log-socket\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.267944 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.268116 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovnkube-script-lib\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.271962 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.288388 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovn-node-metrics-cert\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.298608 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pflv\" (UniqueName: \"kubernetes.io/projected/b0ca704c-b740-43c4-845f-7de5bfa5a29c-kube-api-access-6pflv\") pod \"ovnkube-node-zmzbf\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.306516 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.320028 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.334477 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.352536 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.370844 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.393509 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.408659 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.419050 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:28 crc kubenswrapper[5049]: W0127 16:57:28.435124 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0ca704c_b740_43c4_845f_7de5bfa5a29c.slice/crio-69c7d0a29280dc2dee96bf8941c6fc98faccbbb24726626c9dccb8754d022c06 WatchSource:0}: Error finding container 69c7d0a29280dc2dee96bf8941c6fc98faccbbb24726626c9dccb8754d022c06: Status 404 returned error can't find the container with id 69c7d0a29280dc2dee96bf8941c6fc98faccbbb24726626c9dccb8754d022c06 Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.605840 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 10:39:22.827918834 +0000 UTC Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.645312 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:28 crc kubenswrapper[5049]: E0127 16:57:28.645575 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.802947 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd"} Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.805751 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6"} Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.805807 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701"} Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.805821 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"9da1d938db9aa77c0a4db3d8c2303839a342f1ab2ce2c95e9ef409a4fb3afb84"} Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.807939 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hc4th" event={"ID":"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b","Type":"ContainerStarted","Data":"b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd"} Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.807977 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hc4th" event={"ID":"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b","Type":"ContainerStarted","Data":"330c44198e02a406826ea32994b48bd50cb3f73387c72d9e54ae4f86ea04d3ee"} Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.810140 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.812942 5049 scope.go:117] "RemoveContainer" containerID="db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071" Jan 27 16:57:28 crc kubenswrapper[5049]: E0127 16:57:28.813162 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.813721 5049 generic.go:334] "Generic (PLEG): container finished" podID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerID="ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0" exitCode=0 Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.813795 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerDied","Data":"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0"} Jan 27 16:57:28 crc 
kubenswrapper[5049]: I0127 16:57:28.813830 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerStarted","Data":"69c7d0a29280dc2dee96bf8941c6fc98faccbbb24726626c9dccb8754d022c06"} Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.815637 5049 generic.go:334] "Generic (PLEG): container finished" podID="63d094db-b027-49de-8ac0-427f5cd179e6" containerID="719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00" exitCode=0 Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.815716 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" event={"ID":"63d094db-b027-49de-8ac0-427f5cd179e6","Type":"ContainerDied","Data":"719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00"} Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.815775 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" event={"ID":"63d094db-b027-49de-8ac0-427f5cd179e6","Type":"ContainerStarted","Data":"7e5d9423780b696363f86da15316368ff5bb720fd2fb91cf4f9fe310f602d641"} Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.865046 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.889575 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.948470 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.969746 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:28 crc kubenswrapper[5049]: I0127 16:57:28.988663 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:28Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.018690 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.036198 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.051484 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.091716 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.113725 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.127369 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.144310 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://635645489a9dc5208a2f93206399716e2b6fc97aa376a2cc466e873b0bce0276\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:19Z\\\",\\\"message\\\":\\\"W0127 16:57:08.837223 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
16:57:08.837604 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769533028 cert, and key in /tmp/serving-cert-657694708/serving-signer.crt, /tmp/serving-cert-657694708/serving-signer.key\\\\nI0127 16:57:09.229948 1 observer_polling.go:159] Starting file observer\\\\nW0127 16:57:09.233461 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 16:57:09.233758 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:09.235796 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-657694708/tls.crt::/tmp/serving-cert-657694708/tls.key\\\\\\\"\\\\nF0127 16:57:19.734090 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 
16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.156458 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.173027 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.176107 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.176233 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:57:33.176202125 +0000 UTC m=+28.275175674 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.176793 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.176903 5049 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.176965 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:33.176950513 +0000 UTC m=+28.275924062 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.176987 5049 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.177038 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:33.177028245 +0000 UTC m=+28.276001794 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.176823 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.196237 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.209401 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.223540 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.239772 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.260781 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.275158 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.277936 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.278130 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.278333 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.278425 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.278520 5049 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.278661 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:33.278638502 +0000 UTC m=+28.377612051 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.278855 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.278896 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.278911 5049 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.278967 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:33.27894909 +0000 UTC m=+28.377922639 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.295763 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z 
is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.309455 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.326633 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc 
kubenswrapper[5049]: I0127 16:57:29.343367 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.360166 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.375319 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.606757 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 23:04:44.757273792 +0000 UTC Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.645344 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.645439 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.645584 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:57:29 crc kubenswrapper[5049]: E0127 16:57:29.646013 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.836624 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerStarted","Data":"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd"} Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.837352 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerStarted","Data":"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd"} Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.837380 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerStarted","Data":"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030"} Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.837414 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerStarted","Data":"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f"} Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.838568 5049 generic.go:334] "Generic (PLEG): container finished" podID="63d094db-b027-49de-8ac0-427f5cd179e6" containerID="26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006" exitCode=0 Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.838634 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" event={"ID":"63d094db-b027-49de-8ac0-427f5cd179e6","Type":"ContainerDied","Data":"26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006"} Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.855220 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.875151 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.899624 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z 
is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.916138 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.936814 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.954402 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.978628 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.988419 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-dzlsl"] Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.988920 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dzlsl" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.991690 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.991842 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.991955 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.992019 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 16:57:29 crc kubenswrapper[5049]: I0127 16:57:29.998910 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:29Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.012252 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.026757 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.041326 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.057754 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.070739 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.087518 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.087723 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a38a905c-ad0d-4656-a52c-fdf82d861c2e-serviceca\") pod \"node-ca-dzlsl\" (UID: \"a38a905c-ad0d-4656-a52c-fdf82d861c2e\") " pod="openshift-image-registry/node-ca-dzlsl" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.087929 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qwg8\" (UniqueName: \"kubernetes.io/projected/a38a905c-ad0d-4656-a52c-fdf82d861c2e-kube-api-access-4qwg8\") pod \"node-ca-dzlsl\" (UID: \"a38a905c-ad0d-4656-a52c-fdf82d861c2e\") " pod="openshift-image-registry/node-ca-dzlsl" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.088004 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a38a905c-ad0d-4656-a52c-fdf82d861c2e-host\") pod \"node-ca-dzlsl\" (UID: \"a38a905c-ad0d-4656-a52c-fdf82d861c2e\") " pod="openshift-image-registry/node-ca-dzlsl" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.106277 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.128024 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z 
is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.140384 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.155041 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.170773 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.185414 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\
"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.190320 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qwg8\" (UniqueName: \"kubernetes.io/projected/a38a905c-ad0d-4656-a52c-fdf82d861c2e-kube-api-access-4qwg8\") pod \"node-ca-dzlsl\" (UID: \"a38a905c-ad0d-4656-a52c-fdf82d861c2e\") " pod="openshift-image-registry/node-ca-dzlsl" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.190557 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a38a905c-ad0d-4656-a52c-fdf82d861c2e-host\") pod \"node-ca-dzlsl\" (UID: \"a38a905c-ad0d-4656-a52c-fdf82d861c2e\") " pod="openshift-image-registry/node-ca-dzlsl" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.190663 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a38a905c-ad0d-4656-a52c-fdf82d861c2e-serviceca\") pod \"node-ca-dzlsl\" (UID: \"a38a905c-ad0d-4656-a52c-fdf82d861c2e\") " pod="openshift-image-registry/node-ca-dzlsl" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.190841 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a38a905c-ad0d-4656-a52c-fdf82d861c2e-host\") pod \"node-ca-dzlsl\" (UID: \"a38a905c-ad0d-4656-a52c-fdf82d861c2e\") " pod="openshift-image-registry/node-ca-dzlsl" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.191993 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a38a905c-ad0d-4656-a52c-fdf82d861c2e-serviceca\") pod \"node-ca-dzlsl\" (UID: \"a38a905c-ad0d-4656-a52c-fdf82d861c2e\") " pod="openshift-image-registry/node-ca-dzlsl" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.203111 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.221450 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qwg8\" (UniqueName: \"kubernetes.io/projected/a38a905c-ad0d-4656-a52c-fdf82d861c2e-kube-api-access-4qwg8\") pod \"node-ca-dzlsl\" (UID: \"a38a905c-ad0d-4656-a52c-fdf82d861c2e\") " pod="openshift-image-registry/node-ca-dzlsl" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.221702 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.234222 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.246505 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.262794 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.281128 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.291473 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.313002 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dzlsl" Jan 27 16:57:30 crc kubenswrapper[5049]: W0127 16:57:30.327487 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda38a905c_ad0d_4656_a52c_fdf82d861c2e.slice/crio-edbe719176e02754a1778bd919fae6de05f3b71d73d27d8510a668a8aa0a7bbf WatchSource:0}: Error finding container edbe719176e02754a1778bd919fae6de05f3b71d73d27d8510a668a8aa0a7bbf: Status 404 returned error can't find the container with id edbe719176e02754a1778bd919fae6de05f3b71d73d27d8510a668a8aa0a7bbf Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.464277 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.464965 5049 scope.go:117] "RemoveContainer" containerID="db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071" Jan 27 16:57:30 crc kubenswrapper[5049]: E0127 16:57:30.465127 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.608188 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:49:48.95370859 +0000 UTC Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.645747 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:30 crc kubenswrapper[5049]: E0127 16:57:30.645887 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.847466 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerStarted","Data":"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9"} Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.847537 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerStarted","Data":"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8"} Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.850592 5049 generic.go:334] "Generic (PLEG): container finished" podID="63d094db-b027-49de-8ac0-427f5cd179e6" containerID="c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37" exitCode=0 Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.850643 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" event={"ID":"63d094db-b027-49de-8ac0-427f5cd179e6","Type":"ContainerDied","Data":"c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37"} Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.852746 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dzlsl" event={"ID":"a38a905c-ad0d-4656-a52c-fdf82d861c2e","Type":"ContainerStarted","Data":"91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b"} Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.852775 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dzlsl" event={"ID":"a38a905c-ad0d-4656-a52c-fdf82d861c2e","Type":"ContainerStarted","Data":"edbe719176e02754a1778bd919fae6de05f3b71d73d27d8510a668a8aa0a7bbf"} Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.872340 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.888133 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.907899 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.933065 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z 
is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.953740 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.970864 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:30 crc kubenswrapper[5049]: I0127 16:57:30.984834 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.000253 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:30Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.018028 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.035475 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.056795 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.074421 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.091252 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.105947 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.118096 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.136173 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.153348 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.176835 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z 
is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.189984 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.204394 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.218756 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.232822 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.245464 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.260378 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.293726 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.344951 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.378057 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.415783 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.609872 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:47:27.746479629 +0000 UTC Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.645534 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.645548 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:31 crc kubenswrapper[5049]: E0127 16:57:31.645743 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:57:31 crc kubenswrapper[5049]: E0127 16:57:31.645853 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.806839 5049 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.810168 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.810217 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.810227 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.810317 5049 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.821741 5049 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.822332 5049 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.823926 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.823970 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.823983 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.824000 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.824013 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:31Z","lastTransitionTime":"2026-01-27T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:31 crc kubenswrapper[5049]: E0127 16:57:31.839404 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 
2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.844642 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.844685 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.844728 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.844751 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.844766 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:31Z","lastTransitionTime":"2026-01-27T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.860590 5049 generic.go:334] "Generic (PLEG): container finished" podID="63d094db-b027-49de-8ac0-427f5cd179e6" containerID="4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5" exitCode=0 Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.860776 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" event={"ID":"63d094db-b027-49de-8ac0-427f5cd179e6","Type":"ContainerDied","Data":"4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5"} Jan 27 16:57:31 crc kubenswrapper[5049]: E0127 16:57:31.863342 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.870486 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.870526 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.870538 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.870555 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.870567 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:31Z","lastTransitionTime":"2026-01-27T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.880389 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: E0127 16:57:31.895353 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 
2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.899246 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.900685 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.900796 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.900824 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.900857 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.900886 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:31Z","lastTransitionTime":"2026-01-27T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:31 crc kubenswrapper[5049]: E0127 16:57:31.917327 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 
2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.922259 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.922295 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.922310 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.922329 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.922341 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:31Z","lastTransitionTime":"2026-01-27T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.925996 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z 
is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.941598 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: E0127 16:57:31.943442 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: E0127 16:57:31.943875 5049 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.946788 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.946815 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.946828 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.946846 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.946859 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:31Z","lastTransitionTime":"2026-01-27T16:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.962010 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.975940 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:31 crc kubenswrapper[5049]: I0127 16:57:31.991297 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:31Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.004461 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.022233 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.038498 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.051130 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.051183 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.051196 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.051216 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.051228 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:32Z","lastTransitionTime":"2026-01-27T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.062581 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.077448 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.126181 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.153589 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.153616 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.153625 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.153641 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.153651 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:32Z","lastTransitionTime":"2026-01-27T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.163043 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.259878 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.259934 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.259947 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.259965 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.259976 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:32Z","lastTransitionTime":"2026-01-27T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.364073 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.364113 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.364122 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.364140 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.364155 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:32Z","lastTransitionTime":"2026-01-27T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.467493 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.467531 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.467540 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.467556 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.467566 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:32Z","lastTransitionTime":"2026-01-27T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.571706 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.571770 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.571786 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.571815 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.571830 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:32Z","lastTransitionTime":"2026-01-27T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.610459 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:06:27.833318499 +0000 UTC Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.645417 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:32 crc kubenswrapper[5049]: E0127 16:57:32.645640 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.676646 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.676720 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.676733 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.676756 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.676768 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:32Z","lastTransitionTime":"2026-01-27T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.818865 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.818923 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.818941 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.818972 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.818992 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:32Z","lastTransitionTime":"2026-01-27T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.872031 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerStarted","Data":"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50"} Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.876104 5049 generic.go:334] "Generic (PLEG): container finished" podID="63d094db-b027-49de-8ac0-427f5cd179e6" containerID="fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973" exitCode=0 Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.876161 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" event={"ID":"63d094db-b027-49de-8ac0-427f5cd179e6","Type":"ContainerDied","Data":"fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973"} Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.910275 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.922179 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.922225 5049 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.922237 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.922256 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.922271 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:32Z","lastTransitionTime":"2026-01-27T16:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.941563 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.966270 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:32 crc kubenswrapper[5049]: I0127 16:57:32.986898 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:32Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.008984 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.022873 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.026892 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.026944 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.026960 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.026987 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 
27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.027020 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:33Z","lastTransitionTime":"2026-01-27T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.039954 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.055687 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.071952 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.087351 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.116197 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z 
is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.131110 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.132829 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.132882 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.132891 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.132906 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.132914 5049 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:33Z","lastTransitionTime":"2026-01-27T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.149084 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.166794 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.225377 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.225578 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.225642 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:57:41.225607051 +0000 UTC m=+36.324580640 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.225737 5049 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.225824 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:41.225803175 +0000 UTC m=+36.324776764 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.225737 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.225855 5049 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.226101 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:41.226074642 +0000 UTC m=+36.325048201 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.237473 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.237506 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.237519 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.237536 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.237547 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:33Z","lastTransitionTime":"2026-01-27T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.326901 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.327153 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.327810 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.327874 5049 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.329098 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.329240 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:41.329198335 +0000 UTC m=+36.428171914 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.329511 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.329596 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.329632 5049 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.329813 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:41.329774139 +0000 UTC m=+36.428747728 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.340861 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.340934 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.340947 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.340976 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.341020 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:33Z","lastTransitionTime":"2026-01-27T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.444514 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.444609 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.444629 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.444662 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.444735 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:33Z","lastTransitionTime":"2026-01-27T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.547433 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.547476 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.547489 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.547506 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.547517 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:33Z","lastTransitionTime":"2026-01-27T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.611371 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:15:14.480112497 +0000 UTC Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.645335 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.645499 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.646098 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:33 crc kubenswrapper[5049]: E0127 16:57:33.646180 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.650884 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.650934 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.650945 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.650972 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.650983 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:33Z","lastTransitionTime":"2026-01-27T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.754666 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.754729 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.754741 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.754764 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.754781 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:33Z","lastTransitionTime":"2026-01-27T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.858353 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.858395 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.858407 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.858428 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.858443 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:33Z","lastTransitionTime":"2026-01-27T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.886872 5049 generic.go:334] "Generic (PLEG): container finished" podID="63d094db-b027-49de-8ac0-427f5cd179e6" containerID="317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34" exitCode=0 Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.886945 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" event={"ID":"63d094db-b027-49de-8ac0-427f5cd179e6","Type":"ContainerDied","Data":"317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34"} Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.911656 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.927424 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.952369 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z 
is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.961161 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.961214 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.961226 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.961247 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.961267 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:33Z","lastTransitionTime":"2026-01-27T16:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.974742 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:33 crc kubenswrapper[5049]: I0127 16:57:33.990932 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.005732 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.019983 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.033332 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.046507 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.062534 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.064339 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.064385 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.064394 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.064408 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.064418 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:34Z","lastTransitionTime":"2026-01-27T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.075493 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.090113 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.111523 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.126621 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.169935 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.169992 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.170003 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.170022 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.170039 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:34Z","lastTransitionTime":"2026-01-27T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.272152 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.272196 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.272208 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.272228 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.272240 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:34Z","lastTransitionTime":"2026-01-27T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.375467 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.375923 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.375934 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.375948 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.375956 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:34Z","lastTransitionTime":"2026-01-27T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.479384 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.479464 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.479490 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.479527 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.479553 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:34Z","lastTransitionTime":"2026-01-27T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.582683 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.582781 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.582801 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.582828 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.582845 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:34Z","lastTransitionTime":"2026-01-27T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.612590 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:11:49.717577402 +0000 UTC Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.645056 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:34 crc kubenswrapper[5049]: E0127 16:57:34.645496 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.686075 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.686154 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.686170 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.686195 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.686208 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:34Z","lastTransitionTime":"2026-01-27T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.789267 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.789314 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.789348 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.789369 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.789383 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:34Z","lastTransitionTime":"2026-01-27T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.892086 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.892129 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.892139 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.892154 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.892164 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:34Z","lastTransitionTime":"2026-01-27T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.897468 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerStarted","Data":"610e5a8b7c495db49936614fbfa35d159a28102a15d5dcaf901ad8fcf74f4033"} Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.897766 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.897806 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.903202 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" event={"ID":"63d094db-b027-49de-8ac0-427f5cd179e6","Type":"ContainerStarted","Data":"470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d"} Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.915271 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.930431 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.945391 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.960661 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.969086 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.969651 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.974884 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.995377 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:34 crc 
kubenswrapper[5049]: I0127 16:57:34.995415 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.995423 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.995439 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.995450 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:34Z","lastTransitionTime":"2026-01-27T16:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:34 crc kubenswrapper[5049]: I0127 16:57:34.996327 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://610e5a8b7c495db49936614fbfa35d159a28102a
15d5dcaf901ad8fcf74f4033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.006751 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.019310 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.030309 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.042333 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.057384 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.073086 5049 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.086300 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.097524 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.097587 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:35 crc 
kubenswrapper[5049]: I0127 16:57:35.097601 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.097621 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.097634 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:35Z","lastTransitionTime":"2026-01-27T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.102802 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.115334 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.
io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.126593 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.140712 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.160378 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.173940 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.194948 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.199353 5049 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.199389 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.199399 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.199418 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.199430 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:35Z","lastTransitionTime":"2026-01-27T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.211626 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.227308 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.236778 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.248644 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.263373 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.281594 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://610e5a8b7c495db49936614fbfa35d159a28102a
15d5dcaf901ad8fcf74f4033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.300949 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.302520 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.302554 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.302564 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.302582 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.302592 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:35Z","lastTransitionTime":"2026-01-27T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.314866 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.405584 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.405638 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.405652 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.405731 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.405754 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:35Z","lastTransitionTime":"2026-01-27T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.406659 5049 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.508521 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.508584 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.508596 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.508633 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.508647 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:35Z","lastTransitionTime":"2026-01-27T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.611870 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.611933 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.611949 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.611974 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.611992 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:35Z","lastTransitionTime":"2026-01-27T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.613243 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:42:55.522995439 +0000 UTC Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.645774 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.645785 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:35 crc kubenswrapper[5049]: E0127 16:57:35.646001 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:57:35 crc kubenswrapper[5049]: E0127 16:57:35.646136 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.665451 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/web
hook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.683044 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de5
44d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.702015 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-da
emon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.714981 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.715014 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.715022 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.715036 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.715044 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:35Z","lastTransitionTime":"2026-01-27T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.717642 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.735190 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.748191 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.760249 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.780855 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.792625 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.806535 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.817423 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.817458 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.817468 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.817483 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.817495 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:35Z","lastTransitionTime":"2026-01-27T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.818707 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.838079 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://610e5a8b7c495db49936614fbfa35d159a28102a15d5dcaf901ad8fcf74f4033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.847984 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.861575 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.906504 5049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.920372 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.920432 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.920446 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.920462 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:35 crc kubenswrapper[5049]: I0127 16:57:35.920472 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:35Z","lastTransitionTime":"2026-01-27T16:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 16:57:36 crc kubenswrapper[5049]: I0127 16:57:36.023442 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:36 crc kubenswrapper[5049]: I0127 16:57:36.023782 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:36 crc kubenswrapper[5049]: I0127 16:57:36.023871 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:36 crc kubenswrapper[5049]: I0127 16:57:36.023944 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:36 crc kubenswrapper[5049]: I0127 16:57:36.024006 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:36Z","lastTransitionTime":"2026-01-27T16:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[identical NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID/NodeNotReady event bursts and "Node became not ready" KubeletNotReady conditions repeat at 16:57:36.127, 16:57:36.230, 16:57:36.338, 16:57:36.441, and 16:57:36.548]
Jan 27 16:57:36 crc kubenswrapper[5049]: I0127 16:57:36.614059 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:56:22.713363282 +0000 UTC
Jan 27 16:57:36 crc kubenswrapper[5049]: I0127 16:57:36.645496 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 16:57:36 crc kubenswrapper[5049]: E0127 16:57:36.646104 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[the same event burst and NotReady condition repeat at 16:57:36.651 and 16:57:36.754]
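The "certificate has expired or is not yet valid" text in the webhook failures above is Go's stock crypto/x509 validity-window error: the client's wall clock (2026-01-27T16:57:35Z here) falls after the certificate's NotAfter (2025-08-24T17:21:41Z here), so every TLS handshake with the webhook on 127.0.0.1:9743 is rejected. A minimal standalone sketch of the same window check; the certificate file name is hypothetical, since the real serving certificate is managed by the network-node-identity component rather than read from a fixed file:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path; substitute the webhook's actual serving cert.
        data, err := os.ReadFile("webhook-serving.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        now := time.Now().UTC()
        // The same validity-window comparison that yields "certificate has
        // expired or is not yet valid" in the kubelet log above.
        switch {
        case now.Before(cert.NotBefore):
            fmt.Printf("not yet valid: current time %s is before %s\n",
                now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        case now.After(cert.NotAfter):
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        default:
            fmt.Println("certificate is within its validity window")
        }
    }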
[the same event burst and NotReady condition repeat at 16:57:36.856]
Jan 27 16:57:36 crc kubenswrapper[5049]: I0127 16:57:36.909818 5049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
[the same event burst and NotReady condition repeat at 16:57:36.959, 16:57:37.063, 16:57:37.175, and 16:57:37.278]
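The condition text repeated in every burst above comes from kubelet's network-readiness probe: the node stays NotReady until a CNI network configuration appears in /etc/kubernetes/cni/net.d/, which ovnkube-controller writes once it finishes starting. A rough approximation of that directory check, assuming the libcni convention that .conf, .conflist, and .json files count as configurations:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // cniConfigPresent approximates the readiness check behind "no CNI
    // configuration file in /etc/kubernetes/cni/net.d/": the runtime network
    // stays unready until at least one config file shows up in the conf dir.
    func cniConfigPresent(confDir string) (bool, error) {
        entries, err := os.ReadDir(confDir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            // Assumed libcni convention for recognized config extensions.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
        if err != nil {
            fmt.Println("cannot read conf dir:", err)
            return
        }
        fmt.Println("NetworkReady:", ok)
    }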
[the same event burst and NotReady condition repeat at 16:57:37.381, 16:57:37.484, and 16:57:37.588]
Jan 27 16:57:37 crc kubenswrapper[5049]: I0127 16:57:37.614708 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:11:23.080449423 +0000 UTC
Jan 27 16:57:37 crc kubenswrapper[5049]: I0127 16:57:37.646143 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 16:57:37 crc kubenswrapper[5049]: I0127 16:57:37.646152 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 16:57:37 crc kubenswrapper[5049]: E0127 16:57:37.646350 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 16:57:37 crc kubenswrapper[5049]: E0127 16:57:37.646529 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[the same event burst and NotReady condition repeat at 16:57:37.691]
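The certificate_manager.go:356 lines report the same kubelet-serving expiry (2026-02-24 05:53:03 UTC) but a different rotation deadline on each pass (2025-12-02, 2025-12-07, and 2025-11-17 below). That is expected: client-go's certificate manager re-draws a jittered rotation deadline every time it evaluates, reportedly uniform over roughly the 70-90% band of the certificate's validity window; treat the exact band as an assumption here. Since every drawn deadline is already in the past relative to the log clock (2026-01-27), rotation is due immediately and the line reappears on each retry. A sketch of that draw, with an assumed one-year lifetime:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline mimics the jittered draw: a uniformly random instant
    // in the assumed [70%, 90%] band of the certificate's validity window.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := float64(notAfter.Sub(notBefore))
        return notBefore.Add(time.Duration((0.7 + 0.2*rand.Float64()) * total))
    }

    func main() {
        notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
        notBefore := notAfter.AddDate(-1, 0, 0)                               // assumed one-year lifetime
        for i := 0; i < 3; i++ {
            // A fresh random deadline on each evaluation, as in the log.
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
        }
    }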
[the same event burst and NotReady condition repeat at 16:57:37.794, 16:57:37.897, 16:57:38.001, 16:57:38.104, 16:57:38.208, and 16:57:38.311]
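On the status_manager.go:875 failures earlier in the log: kubelet updates the pod "status" subresource with a strategic merge patch (the $setElementOrder directives in the quoted payloads are strategic-merge syntax), and the API server consults the pod.network-node-identity.openshift.io mutating webhook before admitting the write, so the expired webhook certificate fails every one of these updates. A sketch of the same kind of status-subresource patch via client-go; the kubeconfig path and patch payload are placeholders, not what kubelet actually sends:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; kubelet authenticates with its own
        // client certificate instead.
        cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Minimal strategic-merge patch against the "status" subresource,
        // the same verb the log's failed updates use. With the webhook cert
        // expired, the API server returns the Internal error seen above.
        patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True","lastProbeTime":null}]}}`)
        _, err = cs.CoreV1().Pods("openshift-dns").Patch(context.TODO(),
            "node-resolver-l8gpm", types.StrategicMergePatchType, patch,
            metav1.PatchOptions{}, "status")
        fmt.Println("patch error:", err)
    }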
Has your network provider started?"} Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.415585 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.416029 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.416105 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.416180 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.416237 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:38Z","lastTransitionTime":"2026-01-27T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.519932 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.519985 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.519997 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.520023 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.520044 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:38Z","lastTransitionTime":"2026-01-27T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.615740 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:35:01.4852652 +0000 UTC Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.623049 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.623368 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.623451 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.623563 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.623646 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:38Z","lastTransitionTime":"2026-01-27T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.645502 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:38 crc kubenswrapper[5049]: E0127 16:57:38.645705 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.726281 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.726884 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.726930 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.726958 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.726976 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:38Z","lastTransitionTime":"2026-01-27T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.830812 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.830861 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.830871 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.830888 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.830899 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:38Z","lastTransitionTime":"2026-01-27T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.919912 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/0.log" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.923724 5049 generic.go:334] "Generic (PLEG): container finished" podID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerID="610e5a8b7c495db49936614fbfa35d159a28102a15d5dcaf901ad8fcf74f4033" exitCode=1 Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.923746 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerDied","Data":"610e5a8b7c495db49936614fbfa35d159a28102a15d5dcaf901ad8fcf74f4033"} Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.925217 5049 scope.go:117] "RemoveContainer" containerID="610e5a8b7c495db49936614fbfa35d159a28102a15d5dcaf901ad8fcf74f4033" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.933246 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.933312 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.933334 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.933366 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.933391 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:38Z","lastTransitionTime":"2026-01-27T16:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.944802 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.964036 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:38 crc kubenswrapper[5049]: I0127 16:57:38.982749 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:38Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.006550 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://610e5a8b7c495db49936614fbfa35d159a28102a
15d5dcaf901ad8fcf74f4033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://610e5a8b7c495db49936614fbfa35d159a28102a15d5dcaf901ad8fcf74f4033\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:37Z\\\",\\\"message\\\":\\\"l\\\\nI0127 16:57:37.775279 6396 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:57:37.775307 6396 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 16:57:37.775325 6396 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 16:57:37.775332 6396 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 16:57:37.775362 6396 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:57:37.775376 6396 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 16:57:37.775382 6396 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:57:37.775388 6396 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 16:57:37.775395 6396 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 16:57:37.777323 6396 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:57:37.777355 6396 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 16:57:37.777379 6396 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 16:57:37.777394 6396 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:57:37.777410 6396 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 16:57:37.777435 6396 factory.go:656] Stopping watch factory\\\\nI0127 16:57:37.777459 6396 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.019922 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.036114 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.036210 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.036231 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.036286 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.036307 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:39Z","lastTransitionTime":"2026-01-27T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.043098 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.062360 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.082727 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.103119 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.119992 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.135622 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.139034 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.139088 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.139104 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.139123 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.139133 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:39Z","lastTransitionTime":"2026-01-27T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.149181 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.165814 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.177285 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.241276 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.241311 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.241320 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.241333 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.241342 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:39Z","lastTransitionTime":"2026-01-27T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.344555 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.344608 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.344619 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.344638 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.344650 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:39Z","lastTransitionTime":"2026-01-27T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.449159 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.449210 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.449223 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.449242 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.449607 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:39Z","lastTransitionTime":"2026-01-27T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.554639 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.554747 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.554767 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.554801 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.554823 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:39Z","lastTransitionTime":"2026-01-27T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.616391 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 08:56:37.459132588 +0000 UTC Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.647337 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:39 crc kubenswrapper[5049]: E0127 16:57:39.647586 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.647935 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:39 crc kubenswrapper[5049]: E0127 16:57:39.648179 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.657147 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.657191 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.657201 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.657218 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.657229 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:39Z","lastTransitionTime":"2026-01-27T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.761657 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.761715 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.761728 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.761746 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.761758 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:39Z","lastTransitionTime":"2026-01-27T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.864799 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.864874 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.864893 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.864919 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.864934 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:39Z","lastTransitionTime":"2026-01-27T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.935982 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/0.log" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.939515 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerStarted","Data":"48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df"} Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.939700 5049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.961544 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.967950 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.968001 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.968018 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.968044 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.968065 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:39Z","lastTransitionTime":"2026-01-27T16:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:39 crc kubenswrapper[5049]: I0127 16:57:39.979562 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:39Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.019830 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.063101 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952ab
cebe76f9b87c5d1e820ba8df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://610e5a8b7c495db49936614fbfa35d159a28102a15d5dcaf901ad8fcf74f4033\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:37Z\\\",\\\"message\\\":\\\"l\\\\nI0127 16:57:37.775279 6396 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:57:37.775307 6396 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 16:57:37.775325 6396 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 16:57:37.775332 6396 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 16:57:37.775362 6396 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:57:37.775376 6396 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 16:57:37.775382 6396 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:57:37.775388 6396 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 16:57:37.775395 6396 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 16:57:37.777323 6396 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:57:37.777355 6396 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 16:57:37.777379 6396 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 16:57:37.777394 6396 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:57:37.777410 6396 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 16:57:37.777435 6396 factory.go:656] Stopping watch factory\\\\nI0127 16:57:37.777459 6396 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.070844 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.070910 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.070920 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.070938 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.070954 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:40Z","lastTransitionTime":"2026-01-27T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.083172 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.097351 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.118788 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2d
e544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.132701 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\
\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.148801 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.161911 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.174724 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.174791 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.174807 5049 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.174829 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.174850 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:40Z","lastTransitionTime":"2026-01-27T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.186013 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.203378 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.219212 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.236391 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:40Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.277347 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.277389 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.277400 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.277418 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.277431 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:40Z","lastTransitionTime":"2026-01-27T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.380033 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.380083 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.380094 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.380116 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.380128 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:40Z","lastTransitionTime":"2026-01-27T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.483582 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.483623 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.483632 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.483651 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.483662 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:40Z","lastTransitionTime":"2026-01-27T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.586916 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.586968 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.586981 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.587036 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.587049 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:40Z","lastTransitionTime":"2026-01-27T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.617638 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 23:58:01.56637227 +0000 UTC
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.645197 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 16:57:40 crc kubenswrapper[5049]: E0127 16:57:40.645388 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.690168 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.690223 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.690235 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.690254 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.690267 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:40Z","lastTransitionTime":"2026-01-27T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.793897 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.793972 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.793984 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.794006 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.794022 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:40Z","lastTransitionTime":"2026-01-27T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.897425 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.898008 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.898176 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.898339 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.898489 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:40Z","lastTransitionTime":"2026-01-27T16:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.947025 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/1.log"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.947914 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/0.log"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.955532 5049 generic.go:334] "Generic (PLEG): container finished" podID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerID="48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df" exitCode=1
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.955619 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerDied","Data":"48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df"}
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.955708 5049 scope.go:117] "RemoveContainer" containerID="610e5a8b7c495db49936614fbfa35d159a28102a15d5dcaf901ad8fcf74f4033"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.956751 5049 scope.go:117] "RemoveContainer" containerID="48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df"
Jan 27 16:57:40 crc kubenswrapper[5049]: E0127 16:57:40.956979 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.958811 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9"]
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.959542 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.963289 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.964024 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 27 16:57:40 crc kubenswrapper[5049]: I0127 16:57:40.980336 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:40Z is after 2025-08-24T17:21:41Z"
Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.001443 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.001892 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.002086 
5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.002252 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.002405 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:41Z","lastTransitionTime":"2026-01-27T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.004192 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.019201 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jsf7\" (UniqueName: \"kubernetes.io/projected/0683e0b9-a15b-4b54-a165-1073c0494cf7-kube-api-access-7jsf7\") pod \"ovnkube-control-plane-749d76644c-q27t9\" (UID: \"0683e0b9-a15b-4b54-a165-1073c0494cf7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.019540 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0683e0b9-a15b-4b54-a165-1073c0494cf7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q27t9\" (UID: \"0683e0b9-a15b-4b54-a165-1073c0494cf7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.019885 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0683e0b9-a15b-4b54-a165-1073c0494cf7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q27t9\" (UID: \"0683e0b9-a15b-4b54-a165-1073c0494cf7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.020128 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0683e0b9-a15b-4b54-a165-1073c0494cf7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q27t9\" (UID: \"0683e0b9-a15b-4b54-a165-1073c0494cf7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.024650 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.045888 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952ab
cebe76f9b87c5d1e820ba8df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://610e5a8b7c495db49936614fbfa35d159a28102a15d5dcaf901ad8fcf74f4033\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:37Z\\\",\\\"message\\\":\\\"l\\\\nI0127 16:57:37.775279 6396 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:57:37.775307 6396 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 16:57:37.775325 6396 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 16:57:37.775332 6396 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 16:57:37.775362 6396 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:57:37.775376 6396 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 16:57:37.775382 6396 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:57:37.775388 6396 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 16:57:37.775395 6396 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 16:57:37.777323 6396 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:57:37.777355 6396 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 16:57:37.777379 6396 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 16:57:37.777394 6396 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:57:37.777410 6396 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 16:57:37.777435 6396 factory.go:656] Stopping watch factory\\\\nI0127 16:57:37.777459 6396 ovnkube.go:599] Stopped ovnkube\\\\nI0127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 16:57:40.507246 6535 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] 
Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 16:57:40.507603 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.060069 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.075624 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.094514 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2d
e544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.105488 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.105580 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.105608 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.105643 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.105674 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:41Z","lastTransitionTime":"2026-01-27T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.110834 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.121917 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0683e0b9-a15b-4b54-a165-1073c0494cf7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q27t9\" (UID: \"0683e0b9-a15b-4b54-a165-1073c0494cf7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.122205 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0683e0b9-a15b-4b54-a165-1073c0494cf7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q27t9\" (UID: \"0683e0b9-a15b-4b54-a165-1073c0494cf7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.122381 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jsf7\" (UniqueName: \"kubernetes.io/projected/0683e0b9-a15b-4b54-a165-1073c0494cf7-kube-api-access-7jsf7\") pod \"ovnkube-control-plane-749d76644c-q27t9\" (UID: \"0683e0b9-a15b-4b54-a165-1073c0494cf7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.122535 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0683e0b9-a15b-4b54-a165-1073c0494cf7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q27t9\" (UID: \"0683e0b9-a15b-4b54-a165-1073c0494cf7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.123077 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0683e0b9-a15b-4b54-a165-1073c0494cf7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q27t9\" (UID: \"0683e0b9-a15b-4b54-a165-1073c0494cf7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.123976 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0683e0b9-a15b-4b54-a165-1073c0494cf7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q27t9\" (UID: \"0683e0b9-a15b-4b54-a165-1073c0494cf7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.131021 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.131989 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0683e0b9-a15b-4b54-a165-1073c0494cf7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q27t9\" (UID: \"0683e0b9-a15b-4b54-a165-1073c0494cf7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.145897 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jsf7\" (UniqueName: \"kubernetes.io/projected/0683e0b9-a15b-4b54-a165-1073c0494cf7-kube-api-access-7jsf7\") pod \"ovnkube-control-plane-749d76644c-q27t9\" (UID: \"0683e0b9-a15b-4b54-a165-1073c0494cf7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.148203 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.163901 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.178735 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.190652 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.204378 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.209084 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.209129 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.209142 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.209345 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.209359 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:41Z","lastTransitionTime":"2026-01-27T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.217430 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.238545 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://610e5a8b7c495db49936614fbfa35d159a28102a15d5dcaf901ad8fcf74f4033\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:37Z\\\",\\\"message\\\":\\\"l\\\\nI0127 16:57:37.775279 6396 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:57:37.775307 6396 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 16:57:37.775325 6396 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 16:57:37.775332 6396 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 16:57:37.775362 6396 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:57:37.775376 6396 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 16:57:37.775382 6396 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:57:37.775388 6396 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 16:57:37.775395 6396 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 16:57:37.777323 6396 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:57:37.777355 6396 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 16:57:37.777379 6396 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 16:57:37.777394 6396 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:57:37.777410 6396 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 16:57:37.777435 6396 factory.go:656] Stopping watch factory\\\\nI0127 16:57:37.777459 6396 ovnkube.go:599] Stopped ovnkube\\\\nI0127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 16:57:40.507246 6535 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 16:57:40.507603 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.255035 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.275270 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.281816 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.293193 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: W0127 16:57:41.300736 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0683e0b9_a15b_4b54_a165_1073c0494cf7.slice/crio-b5f0577d02bf6ca877cb19f38a627689581d0492ac8817f1fb2514721a3ef62e WatchSource:0}: Error finding container b5f0577d02bf6ca877cb19f38a627689581d0492ac8817f1fb2514721a3ef62e: Status 404 returned error can't find the container with id b5f0577d02bf6ca877cb19f38a627689581d0492ac8817f1fb2514721a3ef62e Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.310084 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.313805 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.313856 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.313877 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.313906 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.313924 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:41Z","lastTransitionTime":"2026-01-27T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.324605 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.324745 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.324802 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:57:57.324776241 +0000 UTC m=+52.423749810 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.324838 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.324844 5049 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.324914 5049 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.324946 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:57.324937305 +0000 UTC m=+52.423910864 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.324982 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:57.324968166 +0000 UTC m=+52.423941715 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.330555 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.345880 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.363812 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.382938 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.400364 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.417400 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.417435 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.417444 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.417458 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.417467 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:41Z","lastTransitionTime":"2026-01-27T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.418224 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.425397 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.425485 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.425738 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: 
object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.425763 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.425779 5049 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.425838 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:57.425818726 +0000 UTC m=+52.524792385 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.426403 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.426450 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.426462 5049 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.426522 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:57.426506492 +0000 UTC m=+52.525480041 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.435454 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.449017 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.461626 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.520342 5049 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.520404 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.520421 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.520445 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.520460 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:41Z","lastTransitionTime":"2026-01-27T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.618889 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:49:53.33219133 +0000 UTC Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.622967 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.622995 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.623003 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.623017 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.623027 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:41Z","lastTransitionTime":"2026-01-27T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.645012 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.645046 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.645195 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.645306 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.725259 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lv4sx"] Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.725773 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.725844 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.726514 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.726559 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.726572 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.726594 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.726606 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:41Z","lastTransitionTime":"2026-01-27T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.739360 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.750835 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.768611 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.787728 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.811375 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.825531 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.828636 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfxkx\" (UniqueName: \"kubernetes.io/projected/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-kube-api-access-nfxkx\") pod \"network-metrics-daemon-lv4sx\" (UID: \"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\") " pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.828849 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs\") pod \"network-metrics-daemon-lv4sx\" (UID: \"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\") " pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.829588 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.829628 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.829645 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.829702 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.829720 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:41Z","lastTransitionTime":"2026-01-27T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.848147 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://610e5a8b7c495db49936614fbfa35d159a28102a15d5dcaf901ad8fcf74f4033\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:37Z\\\",\\\"message\\\":\\\"l\\\\nI0127 16:57:37.775279 6396 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:57:37.775307 6396 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 16:57:37.775325 6396 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 16:57:37.775332 6396 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 16:57:37.775362 6396 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:57:37.775376 6396 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 16:57:37.775382 6396 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:57:37.775388 6396 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 16:57:37.775395 6396 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 16:57:37.777323 6396 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:57:37.777355 6396 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 16:57:37.777379 6396 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 16:57:37.777394 6396 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:57:37.777410 6396 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 16:57:37.777435 6396 factory.go:656] Stopping watch factory\\\\nI0127 16:57:37.777459 6396 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 16:57:40.507246 6535 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 16:57:40.507603 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.864890 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.878351 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 
16:57:41.892732 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfb
b085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\
\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.906140 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.917727 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.929752 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs\") pod \"network-metrics-daemon-lv4sx\" (UID: \"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\") " pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.929834 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfxkx\" (UniqueName: \"kubernetes.io/projected/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-kube-api-access-nfxkx\") pod \"network-metrics-daemon-lv4sx\" (UID: \"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\") " pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.929994 5049 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 16:57:41 crc kubenswrapper[5049]: E0127 16:57:41.930110 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs podName:d48a67e1-cecf-41d6-a42c-52bdcd3ab892 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:42.430057885 +0000 UTC m=+37.529031434 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs") pod "network-metrics-daemon-lv4sx" (UID: "d48a67e1-cecf-41d6-a42c-52bdcd3ab892") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.931928 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.931975 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.931993 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.932017 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.932032 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:41Z","lastTransitionTime":"2026-01-27T16:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.933950 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.947471 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfxkx\" (UniqueName: \"kubernetes.io/projected/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-kube-api-access-nfxkx\") pod \"network-metrics-daemon-lv4sx\" (UID: \"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\") " pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.952490 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.961077 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/1.log" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.965006 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" event={"ID":"0683e0b9-a15b-4b54-a165-1073c0494cf7","Type":"ContainerStarted","Data":"a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5"} Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.965039 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" event={"ID":"0683e0b9-a15b-4b54-a165-1073c0494cf7","Type":"ContainerStarted","Data":"1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381"} Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.965059 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" event={"ID":"0683e0b9-a15b-4b54-a165-1073c0494cf7","Type":"ContainerStarted","Data":"b5f0577d02bf6ca877cb19f38a627689581d0492ac8817f1fb2514721a3ef62e"} Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.979109 5049 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:41 crc kubenswrapper[5049]: I0127 16:57:41.991600 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:41Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.009975 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://610e5a8b7c495db49936614fbfa35d159a28102a15d5dcaf901ad8fcf74f4033\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:37Z\\\",\\\"message\\\":\\\"l\\\\nI0127 16:57:37.775279 6396 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:57:37.775307 6396 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 16:57:37.775325 6396 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 16:57:37.775332 6396 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 16:57:37.775362 6396 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:57:37.775376 6396 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 16:57:37.775382 6396 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:57:37.775388 6396 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 16:57:37.775395 6396 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 16:57:37.777323 6396 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:57:37.777355 6396 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 16:57:37.777379 6396 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 16:57:37.777394 6396 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:57:37.777410 6396 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 16:57:37.777435 6396 factory.go:656] Stopping watch factory\\\\nI0127 16:57:37.777459 6396 ovnkube.go:599] Stopped ovnkube\\\\nI0127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} 
name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 16:57:40.507246 6535 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 16:57:40.507603 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6ee
c8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.024404 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.034346 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.034385 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.034397 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.034416 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.034430 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:42Z","lastTransitionTime":"2026-01-27T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.036024 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.050020 5049 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af
155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.062854 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.077924 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.089181 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.100635 5049 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.115174 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.127247 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.137113 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.137551 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.137639 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.137763 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.137863 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:42Z","lastTransitionTime":"2026-01-27T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.139411 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.157700 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.175785 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.193761 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.211483 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.226464 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.227707 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.227756 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.227770 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.227793 
5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.227807 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:42Z","lastTransitionTime":"2026-01-27T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:42 crc kubenswrapper[5049]: E0127 16:57:42.243156 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.247269 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.247399 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.247493 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.247588 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.247688 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:42Z","lastTransitionTime":"2026-01-27T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:42 crc kubenswrapper[5049]: E0127 16:57:42.263925 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.267031 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.267232 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.267319 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.267419 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.267525 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:42Z","lastTransitionTime":"2026-01-27T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:42 crc kubenswrapper[5049]: E0127 16:57:42.282839 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.287018 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.287142 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.287294 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.287401 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.287501 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:42Z","lastTransitionTime":"2026-01-27T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:42 crc kubenswrapper[5049]: E0127 16:57:42.303235 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.307252 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.307404 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.307508 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.307599 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.307721 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:42Z","lastTransitionTime":"2026-01-27T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:42 crc kubenswrapper[5049]: E0127 16:57:42.324110 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:42Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:42 crc kubenswrapper[5049]: E0127 16:57:42.324366 5049 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.326130 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
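Every failed status patch in this burst has the same root cause: the "node.network-node-identity.openshift.io" webhook serving on 127.0.0.1:9743 presents a TLS certificate whose notAfter (2025-08-24T17:21:41Z) is months before the node's current clock (2026-01-27), so the apiserver cannot deliver the kubelet's patch, and after five attempts the kubelet gives up ("update node status exceeds retry count"). A minimal Python sketch for reading the offending certificate directly from the node, assuming the webhook is still listening on that port and that the cryptography package is installed (both are assumptions, not shown in the log):

import socket
import ssl
from cryptography import x509  # assumed available: pip install cryptography

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the Post URL above

# Verification is disabled on purpose: the goal is to *read* the expired
# certificate, which a verifying handshake would reject outright.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)  # should print 2025-08-24 17:21:41

Until that certificate is rotated (on CRC this normally happens once the cluster's certificate recovery runs after start-up), the webhook rejects every node patch and the log keeps repeating.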
event="NodeHasSufficientMemory" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.326191 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.326207 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.326227 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.326240 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:42Z","lastTransitionTime":"2026-01-27T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.429313 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.429356 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.429367 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.429384 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.429402 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:42Z","lastTransitionTime":"2026-01-27T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.434825 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs\") pod \"network-metrics-daemon-lv4sx\" (UID: \"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\") " pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:57:42 crc kubenswrapper[5049]: E0127 16:57:42.435021 5049 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 16:57:42 crc kubenswrapper[5049]: E0127 16:57:42.435114 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs podName:d48a67e1-cecf-41d6-a42c-52bdcd3ab892 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:43.435088193 +0000 UTC m=+38.534061862 (durationBeforeRetry 1s). 
Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.531977 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.532036 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.532053 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.532081 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.532105 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:42Z","lastTransitionTime":"2026-01-27T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.619112 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:58:13.896774944 +0000 UTC Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.635710 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.635787 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.635799 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.635825 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.635838 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:42Z","lastTransitionTime":"2026-01-27T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.644997 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:42 crc kubenswrapper[5049]: E0127 16:57:42.645198 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
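The NotReady condition, the missing pod sandbox, and the "Error syncing pod" for network-check-target-xd92c all trace to one fact: /etc/kubernetes/cni/net.d/ holds no CNI configuration because the network plugin has not started, and on this cluster the plugin is itself blocked behind the expired certificates above. A small sketch of the same existence check the runtime performs, assuming it is run on the node itself:

import os

CNI_DIR = "/etc/kubernetes/cni/net.d"  # directory named in the kubelet message

# libcni-style loaders look for .conf, .conflist, and .json files here.
try:
    configs = [name for name in sorted(os.listdir(CNI_DIR))
               if name.endswith((".conf", ".conflist", ".json"))]
except FileNotFoundError:
    configs = []

if configs:
    print("CNI configurations present:", configs)
else:
    print(f"no CNI configuration file in {CNI_DIR} -- network plugin not up")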
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.738504 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.738565 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.738578 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.738598 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.738612 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:42Z","lastTransitionTime":"2026-01-27T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.841290 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.841372 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.841392 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.841421 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.841442 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:42Z","lastTransitionTime":"2026-01-27T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.944129 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.944203 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.944218 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.944244 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:42 crc kubenswrapper[5049]: I0127 16:57:42.944261 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:42Z","lastTransitionTime":"2026-01-27T16:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.047301 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.047397 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.047432 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.047465 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.047488 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:43Z","lastTransitionTime":"2026-01-27T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.150749 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.150831 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.150870 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.150894 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.150907 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:43Z","lastTransitionTime":"2026-01-27T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.254605 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.254759 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.254789 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.254825 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.254849 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:43Z","lastTransitionTime":"2026-01-27T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.359516 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.359579 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.359593 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.359617 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.359631 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:43Z","lastTransitionTime":"2026-01-27T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.447381 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs\") pod \"network-metrics-daemon-lv4sx\" (UID: \"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\") " pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:57:43 crc kubenswrapper[5049]: E0127 16:57:43.447545 5049 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 16:57:43 crc kubenswrapper[5049]: E0127 16:57:43.447602 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs podName:d48a67e1-cecf-41d6-a42c-52bdcd3ab892 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:45.447586754 +0000 UTC m=+40.546560303 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs") pod "network-metrics-daemon-lv4sx" (UID: "d48a67e1-cecf-41d6-a42c-52bdcd3ab892") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.462619 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.462656 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.462664 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.462714 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.462725 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:43Z","lastTransitionTime":"2026-01-27T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.565109 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.565154 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.565165 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.565181 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.565193 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:43Z","lastTransitionTime":"2026-01-27T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.619932 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 10:06:41.10351953 +0000 UTC Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.645604 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.645862 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:43 crc kubenswrapper[5049]: E0127 16:57:43.645863 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.645856 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:43 crc kubenswrapper[5049]: E0127 16:57:43.645968 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:57:43 crc kubenswrapper[5049]: E0127 16:57:43.646206 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.667920 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.667981 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.668002 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.668026 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.668040 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:43Z","lastTransitionTime":"2026-01-27T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.771050 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.771105 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.771120 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.771140 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.771154 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:43Z","lastTransitionTime":"2026-01-27T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.874085 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.874153 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.874171 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.874198 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.874215 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:43Z","lastTransitionTime":"2026-01-27T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.977811 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.977887 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.977906 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.977932 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:43 crc kubenswrapper[5049]: I0127 16:57:43.977952 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:43Z","lastTransitionTime":"2026-01-27T16:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.081003 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.081057 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.081070 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.081094 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.081107 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:44Z","lastTransitionTime":"2026-01-27T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.185130 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.185231 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.185252 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.185282 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.185307 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:44Z","lastTransitionTime":"2026-01-27T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.289323 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.289412 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.289437 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.289469 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.289492 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:44Z","lastTransitionTime":"2026-01-27T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.393105 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.393184 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.393210 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.393242 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.393264 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:44Z","lastTransitionTime":"2026-01-27T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.496980 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.497038 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.497055 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.497077 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.497094 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:44Z","lastTransitionTime":"2026-01-27T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.600071 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.600152 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.600174 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.600207 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.600229 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:44Z","lastTransitionTime":"2026-01-27T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.620348 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:20:52.674752227 +0000 UTC Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.645206 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:44 crc kubenswrapper[5049]: E0127 16:57:44.645393 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.647227 5049 scope.go:117] "RemoveContainer" containerID="db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.703124 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.703770 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.703794 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.703821 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.703839 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:44Z","lastTransitionTime":"2026-01-27T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.806496 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.806536 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.806545 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.806560 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.806570 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:44Z","lastTransitionTime":"2026-01-27T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.909486 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.909551 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.909563 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.909578 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.909587 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:44Z","lastTransitionTime":"2026-01-27T16:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.978951 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.981509 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b"} Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.981890 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 16:57:44 crc kubenswrapper[5049]: I0127 16:57:44.997461 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:44Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.010312 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.012601 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.012643 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.012654 5049 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.012690 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.012704 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:45Z","lastTransitionTime":"2026-01-27T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.030344 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.043337 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPat
h\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.055085 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube
-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.068732 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.088081 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.101621 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.115364 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.115420 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.115433 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.115455 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.115469 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:45Z","lastTransitionTime":"2026-01-27T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.119914 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.132565 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.151450 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://610e5a8b7c495db49936614fbfa35d159a28102a15d5dcaf901ad8fcf74f4033\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:37Z\\\",\\\"message\\\":\\\"l\\\\nI0127 16:57:37.775279 6396 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:57:37.775307 6396 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 16:57:37.775325 6396 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 16:57:37.775332 6396 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 16:57:37.775362 6396 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:57:37.775376 6396 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 16:57:37.775382 6396 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:57:37.775388 6396 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 16:57:37.775395 6396 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 16:57:37.777323 6396 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:57:37.777355 6396 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 16:57:37.777379 6396 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 16:57:37.777394 6396 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:57:37.777410 6396 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 16:57:37.777435 6396 factory.go:656] Stopping watch factory\\\\nI0127 16:57:37.777459 6396 ovnkube.go:599] Stopped ovnkube\\\\nI0127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 16:57:40.507246 6535 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 16:57:40.507603 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.166097 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.179142 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.193020 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.209702 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.217496 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.217557 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.217575 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.217597 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.217615 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:45Z","lastTransitionTime":"2026-01-27T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.223120 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.320593 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.320940 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.320958 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.320979 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.320991 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:45Z","lastTransitionTime":"2026-01-27T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.424251 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.424319 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.424342 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.424370 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.424388 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:45Z","lastTransitionTime":"2026-01-27T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.469764 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs\") pod \"network-metrics-daemon-lv4sx\" (UID: \"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\") " pod="openshift-multus/network-metrics-daemon-lv4sx"
Jan 27 16:57:45 crc kubenswrapper[5049]: E0127 16:57:45.469991 5049 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 16:57:45 crc kubenswrapper[5049]: E0127 16:57:45.470081 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs podName:d48a67e1-cecf-41d6-a42c-52bdcd3ab892 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:49.470058211 +0000 UTC m=+44.569031780 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs") pod "network-metrics-daemon-lv4sx" (UID: "d48a67e1-cecf-41d6-a42c-52bdcd3ab892") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.527477 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.527521 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.527533 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.527554 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.527568 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:45Z","lastTransitionTime":"2026-01-27T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.621102 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:15:14.093818675 +0000 UTC
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.630446 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.630491 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.630500 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.630520 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.630535 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:45Z","lastTransitionTime":"2026-01-27T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.645986 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.646053 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.646093 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx"
Jan 27 16:57:45 crc kubenswrapper[5049]: E0127 16:57:45.646189 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 16:57:45 crc kubenswrapper[5049]: E0127 16:57:45.646351 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892"
Jan 27 16:57:45 crc kubenswrapper[5049]: E0127 16:57:45.646488 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.671925 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"
Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.684967 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\
"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.695739 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 
16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.708265 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.722586 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.732613 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.732665 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.732704 5049 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.732726 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.732743 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:45Z","lastTransitionTime":"2026-01-27T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.743000 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.757895 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.776081 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.793966 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.805874 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.834617 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.834797 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.834856 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.834871 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.834889 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.834904 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:45Z","lastTransitionTime":"2026-01-27T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.867096 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.894054 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://610e5a8b7c495db49936614fbfa35d159a28102a15d5dcaf901ad8fcf74f4033\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:37Z\\\",\\\"message\\\":\\\"l\\\\nI0127 16:57:37.775279 6396 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:57:37.775307 6396 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 16:57:37.775325 6396 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0127 16:57:37.775332 6396 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0127 16:57:37.775362 6396 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:57:37.775376 6396 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 16:57:37.775382 6396 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:57:37.775388 6396 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 16:57:37.775395 6396 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0127 16:57:37.777323 6396 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:57:37.777355 6396 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0127 16:57:37.777379 6396 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 16:57:37.777394 6396 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:57:37.777410 6396 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 16:57:37.777435 6396 factory.go:656] Stopping watch factory\\\\nI0127 16:57:37.777459 6396 ovnkube.go:599] Stopped ovnkube\\\\nI0127 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 16:57:40.507246 6535 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 
neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 16:57:40.507603 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":
\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.915811 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.932428 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.937945 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.937989 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.937998 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.938014 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.938024 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:45Z","lastTransitionTime":"2026-01-27T16:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:45 crc kubenswrapper[5049]: I0127 16:57:45.946283 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.041495 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.041549 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.041564 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.041591 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.041605 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:46Z","lastTransitionTime":"2026-01-27T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.456754 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.456822 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.456841 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.456869 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.456890 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:46Z","lastTransitionTime":"2026-01-27T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.569100 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.569181 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.569206 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.569241 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.569267 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:46Z","lastTransitionTime":"2026-01-27T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.621843 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 08:42:02.337340477 +0000 UTC Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.645456 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:46 crc kubenswrapper[5049]: E0127 16:57:46.645762 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.673080 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.673134 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.673151 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.673174 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.673193 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:46Z","lastTransitionTime":"2026-01-27T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.776601 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.776707 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.776735 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.776770 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:46 crc kubenswrapper[5049]: I0127 16:57:46.776791 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:46Z","lastTransitionTime":"2026-01-27T16:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.504801 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.504894 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.504919 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.504950 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.504969 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:47Z","lastTransitionTime":"2026-01-27T16:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.607829 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.607922 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.607943 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.607977 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.608000 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:47Z","lastTransitionTime":"2026-01-27T16:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.622920 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:52:55.858491647 +0000 UTC Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.645536 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.645536 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:47 crc kubenswrapper[5049]: E0127 16:57:47.645823 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.645668 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:47 crc kubenswrapper[5049]: E0127 16:57:47.645947 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:57:47 crc kubenswrapper[5049]: E0127 16:57:47.645969 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.711507 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.711551 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.711563 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.711586 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:47 crc kubenswrapper[5049]: I0127 16:57:47.711598 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:47Z","lastTransitionTime":"2026-01-27T16:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.435034 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.435097 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.435110 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.435130 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.435143 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:48Z","lastTransitionTime":"2026-01-27T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.539644 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.539750 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.539768 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.539798 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.539820 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:48Z","lastTransitionTime":"2026-01-27T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.623911 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:31:44.673569085 +0000 UTC Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.644942 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.645043 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.645096 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.645114 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.645143 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:48 crc kubenswrapper[5049]: E0127 16:57:48.645148 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.645164 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:48Z","lastTransitionTime":"2026-01-27T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.749454 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.749614 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.749642 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.749702 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.749727 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:48Z","lastTransitionTime":"2026-01-27T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.857981 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.858277 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.858296 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.858325 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.858513 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:48Z","lastTransitionTime":"2026-01-27T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.962070 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.962129 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.962142 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.962162 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:48 crc kubenswrapper[5049]: I0127 16:57:48.962175 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:48Z","lastTransitionTime":"2026-01-27T16:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.024179 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.025878 5049 scope.go:117] "RemoveContainer" containerID="48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df" Jan 27 16:57:49 crc kubenswrapper[5049]: E0127 16:57:49.027222 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.045194 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 
16:57:49.065357 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.065394 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.065404 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.065421 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.065431 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:49Z","lastTransitionTime":"2026-01-27T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.081056 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952ab
cebe76f9b87c5d1e820ba8df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 16:57:40.507246 6535 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 16:57:40.507603 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.101512 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.123893 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.144693 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.165190 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.168982 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.169074 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.169092 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.169142 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.169162 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:49Z","lastTransitionTime":"2026-01-27T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.184428 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.204912 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.224335 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.250929 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.270054 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.271723 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.271763 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.271774 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.271798 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.271810 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:49Z","lastTransitionTime":"2026-01-27T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.285550 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.304789 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.319801 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.334216 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.352344 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:49Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.374625 5049 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.374688 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.374701 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.374721 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.374736 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:49Z","lastTransitionTime":"2026-01-27T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.477927 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.477991 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.478005 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.478030 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.478051 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:49Z","lastTransitionTime":"2026-01-27T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.525211 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs\") pod \"network-metrics-daemon-lv4sx\" (UID: \"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\") " pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:57:49 crc kubenswrapper[5049]: E0127 16:57:49.525563 5049 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 16:57:49 crc kubenswrapper[5049]: E0127 16:57:49.525731 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs podName:d48a67e1-cecf-41d6-a42c-52bdcd3ab892 nodeName:}" failed. No retries permitted until 2026-01-27 16:57:57.525699893 +0000 UTC m=+52.624673482 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs") pod "network-metrics-daemon-lv4sx" (UID: "d48a67e1-cecf-41d6-a42c-52bdcd3ab892") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.581490 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.581555 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.581589 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.581615 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.581633 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:49Z","lastTransitionTime":"2026-01-27T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.624542 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 18:22:47.324092289 +0000 UTC Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.645376 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.645392 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.645376 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:49 crc kubenswrapper[5049]: E0127 16:57:49.645547 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:57:49 crc kubenswrapper[5049]: E0127 16:57:49.645741 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:57:49 crc kubenswrapper[5049]: E0127 16:57:49.645899 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.684394 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.684492 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.684521 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.684554 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.684577 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:49Z","lastTransitionTime":"2026-01-27T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.787301 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.787360 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.787372 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.787395 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.787410 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:49Z","lastTransitionTime":"2026-01-27T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.890780 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.891223 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.891365 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.891490 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.891623 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:49Z","lastTransitionTime":"2026-01-27T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.995533 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.996826 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.996879 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.996912 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:49 crc kubenswrapper[5049]: I0127 16:57:49.996935 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:49Z","lastTransitionTime":"2026-01-27T16:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.100247 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.100343 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.100367 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.100400 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.100419 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:50Z","lastTransitionTime":"2026-01-27T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.204116 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.204181 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.204204 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.204245 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.204274 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:50Z","lastTransitionTime":"2026-01-27T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.307596 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.307778 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.307803 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.307832 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.307853 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:50Z","lastTransitionTime":"2026-01-27T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.410960 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.411020 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.411032 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.411058 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.411074 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:50Z","lastTransitionTime":"2026-01-27T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.514443 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.514847 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.515057 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.515192 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.515321 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:50Z","lastTransitionTime":"2026-01-27T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.618312 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.618354 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.618363 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.618384 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.618395 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:50Z","lastTransitionTime":"2026-01-27T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.625543 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:42:11.879505035 +0000 UTC Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.645001 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:50 crc kubenswrapper[5049]: E0127 16:57:50.645150 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.721339 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.721403 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.721440 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.721478 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.721501 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:50Z","lastTransitionTime":"2026-01-27T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.825508 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.825906 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.825992 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.826082 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.826175 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:50Z","lastTransitionTime":"2026-01-27T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.929918 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.930189 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.930252 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.930329 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:50 crc kubenswrapper[5049]: I0127 16:57:50.930441 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:50Z","lastTransitionTime":"2026-01-27T16:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.033597 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.034009 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.034135 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.034315 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.034507 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:51Z","lastTransitionTime":"2026-01-27T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.142326 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.142386 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.142409 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.142439 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.142459 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:51Z","lastTransitionTime":"2026-01-27T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.246323 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.246437 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.246460 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.246491 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.246510 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:51Z","lastTransitionTime":"2026-01-27T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.349487 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.349856 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.350000 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.350174 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.350310 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:51Z","lastTransitionTime":"2026-01-27T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.453606 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.454125 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.454353 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.454600 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.454849 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:51Z","lastTransitionTime":"2026-01-27T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.558823 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.559260 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.559478 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.559781 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.560092 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:51Z","lastTransitionTime":"2026-01-27T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.626792 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:11:42.327281787 +0000 UTC Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.645198 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.645470 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.645268 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:51 crc kubenswrapper[5049]: E0127 16:57:51.646060 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:57:51 crc kubenswrapper[5049]: E0127 16:57:51.646225 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:57:51 crc kubenswrapper[5049]: E0127 16:57:51.646468 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.664551 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.664800 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.664832 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.664869 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.664898 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:51Z","lastTransitionTime":"2026-01-27T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.770707 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.771182 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.771337 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.771765 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.771969 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:51Z","lastTransitionTime":"2026-01-27T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.875485 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.875884 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.875991 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.876099 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.876185 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:51Z","lastTransitionTime":"2026-01-27T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.979087 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.979163 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.979182 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.979210 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:51 crc kubenswrapper[5049]: I0127 16:57:51.979231 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:51Z","lastTransitionTime":"2026-01-27T16:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.082600 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.082735 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.082755 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.082782 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.082799 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:52Z","lastTransitionTime":"2026-01-27T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.186199 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.186257 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.186268 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.186287 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.186298 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:52Z","lastTransitionTime":"2026-01-27T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.289961 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.290022 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.290043 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.290067 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.290084 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:52Z","lastTransitionTime":"2026-01-27T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.329291 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.329338 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.329355 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.329379 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.329396 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:52Z","lastTransitionTime":"2026-01-27T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: E0127 16:57:52.351794 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:52Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.360858 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.360945 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
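Every one of the patch failures above ends in the same TLS error from the node.network-node-identity.openshift.io webhook: its serving certificate expired 2025-08-24T17:21:41Z, long before the logged clock of 2026-01-27. The wording "x509: certificate has expired or is not yet valid" is Go's crypto/x509 verification error. As a minimal sketch only (the file path is hypothetical; the real check happens inside the TLS handshake to https://127.0.0.1:9743), this is how a certificate's validity window can be tested the way crypto/x509 does:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path: copy the webhook's serving certificate here
	// from wherever network-node-identity mounts it on the node.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	// crypto/x509 reports "certificate has expired or is not yet valid"
	// whenever the current time falls outside [NotBefore, NotAfter].
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid: current time %s is outside %s .. %s\n",
			now.Format(time.RFC3339),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}
```

Run against the expired webhook certificate, the sketch would print the same "current time ... is after ..." relationship the kubelet logs on every retry.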
event="NodeHasNoDiskPressure" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.360974 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.361011 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.361046 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:52Z","lastTransitionTime":"2026-01-27T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: E0127 16:57:52.384967 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:52Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.391251 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.391305 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
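The status patch inside each err="failed to patch status ..." entry is a JSON document rendered as a Go-quoted string, which is why it appears here as runs of \\\" escapes. Purely as a reading aid, and assuming the escaped payload has been copied out of the journal into a file, a sketch that strips however many quoting layers the copy retained and re-indents the JSON:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

func main() {
	// Hypothetical input: the escaped payload between the outer \" ... \"
	// delimiters of the journal entry, saved verbatim to patch.txt.
	raw, err := os.ReadFile("patch.txt")
	if err != nil {
		log.Fatal(err)
	}
	s := strings.TrimSpace(string(raw))
	// Depending on how the text was copied, one or more layers of Go
	// string quoting may remain; unquote until the result parses as JSON.
	for !json.Valid([]byte(s)) {
		u, err := strconv.Unquote(`"` + s + `"`)
		if err != nil {
			log.Fatal("could not reduce payload to valid JSON: ", err)
		}
		s = u
	}
	var out bytes.Buffer
	if err := json.Indent(&out, []byte(s), "", "  "); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.String())
}
```

Indented this way, the payload is just the node's conditions, allocatable/capacity figures, image list, and nodeInfo — the routine status heartbeat the kubelet cannot land while the webhook rejects the Post.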
event="NodeHasNoDiskPressure" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.391321 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.391347 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.391366 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:52Z","lastTransitionTime":"2026-01-27T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: E0127 16:57:52.412326 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:52Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.417956 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.418016 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
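Meanwhile, every NodeNotReady heartbeat in this window carries the same root message: no CNI configuration file in /etc/kubernetes/cni/net.d/, so the runtime keeps reporting NetworkReady=false until the network provider writes one. As a minimal sketch under stated assumptions (this is not the kubelet's actual code path, which goes through libcni; the extensions below are the ones CNI config loaders conventionally accept), a check for that directory looks like:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	const confDir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		// On the node in this log the directory is empty or unreadable,
		// which is exactly the NetworkPluginNotReady condition.
		log.Fatalf("cannot read %s: %v", confDir, err)
	}
	var found []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		// Extensions conventionally recognized for CNI network configs.
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration file found; has your network provider started?")
		return
	}
	fmt.Println("CNI configuration present:", found)
}
```

Once the network operator drops a config file into that directory, the runtime flips NetworkReady to true and the NodeNotReady loop above stops repeating.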
event="NodeHasNoDiskPressure" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.418036 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.418060 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.418078 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:52Z","lastTransitionTime":"2026-01-27T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: E0127 16:57:52.439890 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:52Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.445057 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.445115 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.445131 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.445154 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.445172 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:52Z","lastTransitionTime":"2026-01-27T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: E0127 16:57:52.465221 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:52Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:52 crc kubenswrapper[5049]: E0127 16:57:52.465443 5049 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.467804 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.467885 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.467908 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.467939 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.467965 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:52Z","lastTransitionTime":"2026-01-27T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.571105 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.571170 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.571188 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.571212 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.571230 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:52Z","lastTransitionTime":"2026-01-27T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.628299 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 17:18:11.630620517 +0000 UTC Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.646034 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:52 crc kubenswrapper[5049]: E0127 16:57:52.646225 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.674356 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.674424 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.674445 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.674472 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.674491 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:52Z","lastTransitionTime":"2026-01-27T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.777202 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.777268 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.777286 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.777318 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.777342 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:52Z","lastTransitionTime":"2026-01-27T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.880742 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.880867 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.880947 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.880991 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.881123 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:52Z","lastTransitionTime":"2026-01-27T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.981450 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.983861 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.983930 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.983953 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.983983 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.984003 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:52Z","lastTransitionTime":"2026-01-27T16:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:52 crc kubenswrapper[5049]: I0127 16:57:52.995284 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.006227 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.026909 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.048925 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.071445 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.088105 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.088181 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.088204 5049 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.088235 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.088259 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:53Z","lastTransitionTime":"2026-01-27T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.098407 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.116603 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.133399 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.152419 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.166255 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.178029 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.191070 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.191102 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.191114 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.191133 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.191146 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:53Z","lastTransitionTime":"2026-01-27T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.191250 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.203235 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.236328 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952ab
cebe76f9b87c5d1e820ba8df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 16:57:40.507246 6535 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 16:57:40.507603 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.250359 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.262578 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.278053 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:53Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.294201 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.294237 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.294250 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.294271 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.294283 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:53Z","lastTransitionTime":"2026-01-27T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.397379 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.397431 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.397451 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.397477 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.397495 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:53Z","lastTransitionTime":"2026-01-27T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.500543 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.500597 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.500613 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.500636 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.500653 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:53Z","lastTransitionTime":"2026-01-27T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.604282 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.604339 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.604356 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.604380 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.604397 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:53Z","lastTransitionTime":"2026-01-27T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.629485 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:16:29.228926871 +0000 UTC Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.646190 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.646252 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:57:53 crc kubenswrapper[5049]: E0127 16:57:53.646345 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.646403 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:57:53 crc kubenswrapper[5049]: E0127 16:57:53.646504 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:57:53 crc kubenswrapper[5049]: E0127 16:57:53.646610 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.708059 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.708108 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.708126 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.708149 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.708167 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:53Z","lastTransitionTime":"2026-01-27T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.708167 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:53Z","lastTransitionTime":"2026-01-27T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.811451 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.811519 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.811543 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.811573 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.811598 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:53Z","lastTransitionTime":"2026-01-27T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.915050 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.915097 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.915113 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.915136 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:53 crc kubenswrapper[5049]: I0127 16:57:53.915155 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:53Z","lastTransitionTime":"2026-01-27T16:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.018513 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.018569 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.018587 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.018614 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.018633 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:54Z","lastTransitionTime":"2026-01-27T16:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.121425 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.121506 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.121521 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.121545 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.121562 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:54Z","lastTransitionTime":"2026-01-27T16:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.224891 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.224927 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.224936 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.224950 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.224959 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:54Z","lastTransitionTime":"2026-01-27T16:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.328250 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.328360 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.328415 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.328448 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.328469 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:54Z","lastTransitionTime":"2026-01-27T16:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.432051 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.432129 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.432141 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.432165 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.432178 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:54Z","lastTransitionTime":"2026-01-27T16:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.534926 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.535040 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.535070 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.535105 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.535129 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:54Z","lastTransitionTime":"2026-01-27T16:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.630473 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:21:31.053345252 +0000 UTC
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.637928 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.637965 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.637975 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.637990 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.638003 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:54Z","lastTransitionTime":"2026-01-27T16:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.645652 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 16:57:54 crc kubenswrapper[5049]: E0127 16:57:54.645863 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.741138 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.741192 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.741218 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.741244 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
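Note: the kubelet_node_status.go / setters.go cycle above repeats roughly every 100 ms, and the node stays Ready=False with reason KubeletNotReady for as long as the CNI configuration is missing. A companion sketch, under the same hypothetical kubelet.log assumption as above, that pulls the flat JSON object after condition= out of each setters.go:603 line so the flapping is visible at a glance:

#!/usr/bin/env python3
"""Extract the Ready condition the kubelet keeps re-recording."""
import json
import re

# The condition object in these lines is flat JSON with no nested braces,
# so a greedy brace match is sufficient for this log format.
COND = re.compile(r'"Node became not ready".*?condition=(\{.*\})')

with open("kubelet.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        m = COND.search(line)
        if m:
            c = json.loads(m.group(1))
            print(c["lastHeartbeatTime"], c["type"], c["status"], c["reason"])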
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.741261 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:54Z","lastTransitionTime":"2026-01-27T16:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.844086 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.844139 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.844156 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.844179 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.844196 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:54Z","lastTransitionTime":"2026-01-27T16:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.975622 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.975661 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.975697 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.975716 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:54 crc kubenswrapper[5049]: I0127 16:57:54.975728 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:54Z","lastTransitionTime":"2026-01-27T16:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.078812 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.078866 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.078879 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.078902 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.078916 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:55Z","lastTransitionTime":"2026-01-27T16:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.182081 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.182122 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.182137 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.182158 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.182173 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:55Z","lastTransitionTime":"2026-01-27T16:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.286563 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.286628 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.286641 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.286663 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.286699 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:55Z","lastTransitionTime":"2026-01-27T16:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.390180 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.390247 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.390265 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.390288 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.390306 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:55Z","lastTransitionTime":"2026-01-27T16:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.492645 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.493004 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.493110 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.493205 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.493295 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:55Z","lastTransitionTime":"2026-01-27T16:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.597071 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.597485 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.597583 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.597663 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.597779 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:55Z","lastTransitionTime":"2026-01-27T16:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.631602 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:29:12.269869322 +0000 UTC
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.645234 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.645277 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
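Note: the certificate_manager.go lines deserve a second look. The serving certificate expires 2026-02-24, yet every computed rotation deadline (2025-12-25, 2025-12-26, 2025-11-26) is already in the past on this clock, so the kubelet attempts rotation immediately and recomputes a fresh jittered deadline on each pass. The sketch below is a toy reconstruction of that behavior, under the assumption (not confirmed by this log) that the deadline is drawn uniformly from roughly the 70-90% band of the certificate's lifetime, as upstream Kubernetes certificate rotation does, and that the certificate here was issued one year before its expiration:

#!/usr/bin/env python3
"""Illustrate the jittered rotation deadlines seen in certificate_manager.go."""
import random
from datetime import datetime

not_before = datetime(2025, 2, 24, 5, 53, 3)  # assumed issuance, one year before expiry
not_after = datetime(2026, 2, 24, 5, 53, 3)   # expiration printed in the log

lifetime = not_after - not_before
deadline = not_before + lifetime * random.uniform(0.7, 0.9)
print("rotation deadline:", deadline)
print("already past?", deadline < datetime(2026, 1, 27, 16, 57, 55))

Runs of this with different random draws land between early November and mid-January, which matches the spread of deadlines in the log above.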
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.645453 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 16:57:55 crc kubenswrapper[5049]: E0127 16:57:55.645481 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892"
Jan 27 16:57:55 crc kubenswrapper[5049]: E0127 16:57:55.645611 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 16:57:55 crc kubenswrapper[5049]: E0127 16:57:55.645791 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.663376 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.680330 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.700018 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.700048 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.700057 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.700073 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.700084 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:55Z","lastTransitionTime":"2026-01-27T16:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.704512 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 16:57:40.507246 6535 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 16:57:40.507603 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.715888 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.730569 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.749819 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.767319 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.784370 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.799206 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.803772 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.803813 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.803831 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.803853 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.803870 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:55Z","lastTransitionTime":"2026-01-27T16:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.820802 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.839999 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.859814 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.876699 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.894185 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.906428 5049 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.906471 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.906481 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.906498 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.906511 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:55Z","lastTransitionTime":"2026-01-27T16:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.911667 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.926925 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:55 crc kubenswrapper[5049]: I0127 16:57:55.939852 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:57:55Z is after 2025-08-24T17:21:41Z" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.009512 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.009548 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.009558 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.009576 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.009587 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:56Z","lastTransitionTime":"2026-01-27T16:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.112359 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.112441 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.112463 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.112493 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.112514 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:56Z","lastTransitionTime":"2026-01-27T16:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.215726 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.215771 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.215787 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.215810 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.215827 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:56Z","lastTransitionTime":"2026-01-27T16:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.318573 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.318615 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.318625 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.318642 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.318651 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:56Z","lastTransitionTime":"2026-01-27T16:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.421703 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.421806 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.421837 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.421868 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.421892 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:56Z","lastTransitionTime":"2026-01-27T16:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.524246 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.524305 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.524322 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.524347 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.524363 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:56Z","lastTransitionTime":"2026-01-27T16:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.627049 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.627108 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.627119 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.627139 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.627155 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:56Z","lastTransitionTime":"2026-01-27T16:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.632238 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 13:37:34.617822843 +0000 UTC Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.645658 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:57:56 crc kubenswrapper[5049]: E0127 16:57:56.645857 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.730456 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.730491 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.730502 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.730564 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:57:56 crc kubenswrapper[5049]: I0127 16:57:56.730577 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:56Z","lastTransitionTime":"2026-01-27T16:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.425266 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.425485 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.425554 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.425606 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:58:29.42556871 +0000 UTC m=+84.524542289 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.425655 5049 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.425745 5049 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.425829 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:58:29.425792826 +0000 UTC m=+84.524766405 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.425873 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:58:29.425853147 +0000 UTC m=+84.524826846 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.455550 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.455608 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.455630 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.455667 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.455728 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:57Z","lastTransitionTime":"2026-01-27T16:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
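The UnmountVolume.TearDown failure above is a registration race, not a storage fault: after a kubelet restart its CSI driver registry starts empty, and teardown cannot proceed until the driver re-registers over the plugin-registration socket. An illustrative sketch of that lookup (the dict and function are hypothetical, not kubelet API):

# Illustrative only: unmount needs the named CSI driver in the kubelet's
# in-memory registry, which is repopulated as drivers re-register.
registered_csi_drivers = {}  # driver name -> endpoint, filled on registration

def teardown_volume(driver: str) -> None:
    if driver not in registered_csi_drivers:
        raise RuntimeError(
            f"driver name {driver} not found in the list of registered CSI drivers")
    # A real unmounter would now call NodeUnpublishVolume on the endpoint.

try:
    teardown_volume("kubevirt.io.hostpath-provisioner")
except RuntimeError as e:
    print(e)

Once the hostpath-provisioner plugin re-registers, the same operation is retried and succeeds without intervention.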
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.526533 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs\") pod \"network-metrics-daemon-lv4sx\" (UID: \"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\") " pod="openshift-multus/network-metrics-daemon-lv4sx"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.526591 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.526624 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.526697 5049 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.526755 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.526775 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.526782 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs podName:d48a67e1-cecf-41d6-a42c-52bdcd3ab892 nodeName:}" failed. No retries permitted until 2026-01-27 16:58:13.526762648 +0000 UTC m=+68.625736207 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs") pod "network-metrics-daemon-lv4sx" (UID: "d48a67e1-cecf-41d6-a42c-52bdcd3ab892") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.526787 5049 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.526824 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 16:58:29.52681311 +0000 UTC m=+84.625786679 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.526947 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.527021 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.527048 5049 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.527167 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 16:58:29.527133667 +0000 UTC m=+84.626107286 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.558520 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.558581 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.558599 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.558626 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.558651 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:57Z","lastTransitionTime":"2026-01-27T16:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
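Note the durationBeforeRetry values above: 16s for one volume, 32s for others. Failed volume operations back off exponentially per volume, which is why the "No retries permitted until ..." timestamps drift further apart. A hedged reconstruction of that schedule; the initial delay, factor, and cap are assumptions taken from upstream kubelet defaults, not from this log:

# Doubling backoff for per-volume retries; 16s and 32s from the log sit
# on this curve. Constants are assumed upstream defaults.
def backoff_schedule(initial=0.5, factor=2.0, cap=122.0, steps=10):
    delay = initial
    for _ in range(steps):
        yield min(delay, cap)
        delay *= factor

print([f"{d:g}s" for d in backoff_schedule()])
# ['0.5s', '1s', '2s', '4s', '8s', '16s', '32s', '64s', '122s', '122s']

The "object ... not registered" errors themselves are transient: after a restart the kubelet has not yet synced the referenced ConfigMaps and Secrets into its object cache, so every dependent mount fails until the informers catch up.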
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.632377 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:06:43.44431285 +0000 UTC
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.645973 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.646045 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx"
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.646275 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.646311 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.646550 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892"
Jan 27 16:57:57 crc kubenswrapper[5049]: E0127 16:57:57.646818 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.661110 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.661202 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.661221 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.661276 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:57 crc kubenswrapper[5049]: I0127 16:57:57.661294 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:57Z","lastTransitionTime":"2026-01-27T16:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:58 crc kubenswrapper[5049]: I0127 16:57:58.632631 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:51:36.064370113 +0000 UTC
Jan 27 16:57:58 crc kubenswrapper[5049]: I0127 16:57:58.645538 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 16:57:58 crc kubenswrapper[5049]: E0127 16:57:58.645842 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 16:57:58 crc kubenswrapper[5049]: I0127 16:57:58.703774 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:58 crc kubenswrapper[5049]: I0127 16:57:58.703828 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:58 crc kubenswrapper[5049]: I0127 16:57:58.703847 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:58 crc kubenswrapper[5049]: I0127 16:57:58.703871 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:58 crc kubenswrapper[5049]: I0127 16:57:58.703888 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:58Z","lastTransitionTime":"2026-01-27T16:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:58 crc kubenswrapper[5049]: I0127 16:57:58.807657 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:58 crc kubenswrapper[5049]: I0127 16:57:58.807719 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:58 crc kubenswrapper[5049]: I0127 16:57:58.807728 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:58 crc kubenswrapper[5049]: I0127 16:57:58.807745 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:58 crc kubenswrapper[5049]: I0127 16:57:58.807772 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:58Z","lastTransitionTime":"2026-01-27T16:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
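The certificate_manager lines are worth a second look: each sync reports a rotation deadline (2025-12-19, 2025-12-23, 2026-01-09, ...) that is already in the past relative to the log's clock of 2026-01-27, so rotation is perpetually due and a fresh jittered deadline is logged each time. A sketch of how such a deadline is derived, assuming the jitter rule from upstream client-go (deadline = notBefore plus 70-100% of the certificate lifetime) and assuming a one-year serving certificate, which is consistent with the dates above but not stated in the log:

# Hedged reconstruction of the jittered rotation deadline.
import random
from datetime import datetime, timedelta

not_after = datetime(2026, 2, 24, 5, 53, 3)   # expiration from the log
lifetime = timedelta(days=365)                # assumption: one-year cert
not_before = not_after - lifetime

def rotation_deadline(rng=random):
    jitter = 0.7 + 0.3 * rng.random()         # uniform in [70%, 100%)
    return not_before + lifetime * jitter

print(rotation_deadline())  # always lands between 2025-11-06 and 2026-02-24

Every deadline the log shows falls inside that window, and most fall before "now", which matches the repeated recomputation.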
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.533547 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.533588 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.533600 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.533845 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.533933 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:59Z","lastTransitionTime":"2026-01-27T16:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.632950 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:35:58.81954772 +0000 UTC
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.637396 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.637437 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.637446 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.637467 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.637479 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:59Z","lastTransitionTime":"2026-01-27T16:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.645857 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 16:57:59 crc kubenswrapper[5049]: E0127 16:57:59.645952 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.646291 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.646372 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 16:57:59 crc kubenswrapper[5049]: E0127 16:57:59.646448 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 16:57:59 crc kubenswrapper[5049]: E0127 16:57:59.646533 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.741268 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.741377 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.741395 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.741455 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:57:59 crc kubenswrapper[5049]: I0127 16:57:59.741473 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:57:59Z","lastTransitionTime":"2026-01-27T16:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
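The "No sandbox for pod can be found" / "Error syncing pod, skipping" pairs show the sync-loop gate in action: pods that need the cluster network cannot get a new sandbox while NetworkReady=false, whereas host-network static pods (like kube-apiserver below) keep running. An illustrative sketch of that decision; the function and pod list are illustrative, not kubelet API:

# Sandbox creation waits for network readiness unless the pod is
# host-network. All pods skipped above are cluster-network pods.
def can_create_sandbox(host_network: bool, network_ready: bool) -> bool:
    return host_network or network_ready

pods = {"network-check-target-xd92c": False,  # needs the pod network
        "kube-apiserver-crc": True}           # host-network static pod
for name, host_net in pods.items():
    print(name, "can start:", can_create_sandbox(host_net, network_ready=False))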
Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.469098 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.469854 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.469891 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.469924 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.469942 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:00Z","lastTransitionTime":"2026-01-27T16:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.563813 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.573634 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.573713 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.573736 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.573760 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.573778 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:00Z","lastTransitionTime":"2026-01-27T16:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.583082 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z"
Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.604419 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z"
Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.631742 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a34
9361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.633751 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:42:02.201523847 +0000 UTC Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.645715 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:00 crc kubenswrapper[5049]: E0127 16:58:00.645886 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.653651 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.669794 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.683227 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.683280 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.683294 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.683322 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.683338 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:00Z","lastTransitionTime":"2026-01-27T16:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.690815 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.708376 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.733585 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952ab
cebe76f9b87c5d1e820ba8df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 16:57:40.507246 6535 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 16:57:40.507603 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.750174 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.767800 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.783364 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.786886 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.786926 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.786937 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.786954 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.786966 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:00Z","lastTransitionTime":"2026-01-27T16:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.812880 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.838242 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\
"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.857598 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 
16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.880084 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.890298 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.890345 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.890356 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.890374 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.890387 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:00Z","lastTransitionTime":"2026-01-27T16:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.901858 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.922966 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:00Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.993745 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.993784 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:00 crc 
kubenswrapper[5049]: I0127 16:58:00.993797 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.993815 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:00 crc kubenswrapper[5049]: I0127 16:58:00.993829 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:00Z","lastTransitionTime":"2026-01-27T16:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.097454 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.097546 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.097567 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.097599 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.097618 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:01Z","lastTransitionTime":"2026-01-27T16:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.202148 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.202207 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.202226 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.202255 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.202278 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:01Z","lastTransitionTime":"2026-01-27T16:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.305427 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.305492 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.305511 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.305538 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.305555 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:01Z","lastTransitionTime":"2026-01-27T16:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.409749 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.409817 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.409915 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.410007 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.410029 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:01Z","lastTransitionTime":"2026-01-27T16:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.513611 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.513719 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.513749 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.513792 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.513818 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:01Z","lastTransitionTime":"2026-01-27T16:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.617118 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.617199 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.617218 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.617247 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.617266 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:01Z","lastTransitionTime":"2026-01-27T16:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.634234 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 02:03:21.26553899 +0000 UTC Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.646152 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.646206 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:01 crc kubenswrapper[5049]: E0127 16:58:01.646392 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.646424 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:01 crc kubenswrapper[5049]: E0127 16:58:01.646635 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:01 crc kubenswrapper[5049]: E0127 16:58:01.646847 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.722778 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.722893 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.722920 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.722952 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.722974 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:01Z","lastTransitionTime":"2026-01-27T16:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.827503 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.827597 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.827614 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.827640 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.827657 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:01Z","lastTransitionTime":"2026-01-27T16:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.932354 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.932445 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.932470 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.932503 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:01 crc kubenswrapper[5049]: I0127 16:58:01.932527 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:01Z","lastTransitionTime":"2026-01-27T16:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.036870 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.036956 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.036979 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.037009 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.037033 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:02Z","lastTransitionTime":"2026-01-27T16:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.140634 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.140722 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.140743 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.140833 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.140853 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:02Z","lastTransitionTime":"2026-01-27T16:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.244838 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.244894 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.244912 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.244939 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.244958 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:02Z","lastTransitionTime":"2026-01-27T16:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.349031 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.349105 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.349175 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.349209 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.349246 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:02Z","lastTransitionTime":"2026-01-27T16:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.453317 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.453416 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.453439 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.453470 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.453490 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:02Z","lastTransitionTime":"2026-01-27T16:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.523828 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.523891 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.523909 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.523934 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.523952 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:02Z","lastTransitionTime":"2026-01-27T16:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:02 crc kubenswrapper[5049]: E0127 16:58:02.546205 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:02Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.552588 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.552655 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.552699 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.552722 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.552734 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:02Z","lastTransitionTime":"2026-01-27T16:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:02 crc kubenswrapper[5049]: E0127 16:58:02.577381 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:02Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.583744 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.583867 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.583924 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.583956 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.583974 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:02Z","lastTransitionTime":"2026-01-27T16:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:02 crc kubenswrapper[5049]: E0127 16:58:02.609523 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:02Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.615549 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.615615 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.615634 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.615664 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.615719 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:02Z","lastTransitionTime":"2026-01-27T16:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.635315 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:30:00.817974401 +0000 UTC Jan 27 16:58:02 crc kubenswrapper[5049]: E0127 16:58:02.638247 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:02Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.644341 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.644407 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.644427 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.644458 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.644479 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:02Z","lastTransitionTime":"2026-01-27T16:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.645256 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:02 crc kubenswrapper[5049]: E0127 16:58:02.645440 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:02 crc kubenswrapper[5049]: E0127 16:58:02.667051 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:02Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:02 crc kubenswrapper[5049]: E0127 16:58:02.667341 5049 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.669812 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.669867 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.669888 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.669915 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.669936 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:02Z","lastTransitionTime":"2026-01-27T16:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.772942 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.773005 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.773023 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.773111 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.773133 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:02Z","lastTransitionTime":"2026-01-27T16:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.876444 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.876502 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.876515 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.876541 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.876559 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:02Z","lastTransitionTime":"2026-01-27T16:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.980083 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.980141 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.980161 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.980188 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:02 crc kubenswrapper[5049]: I0127 16:58:02.980207 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:02Z","lastTransitionTime":"2026-01-27T16:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.083711 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.083755 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.083769 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.083789 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.083803 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:03Z","lastTransitionTime":"2026-01-27T16:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.186957 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.187019 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.187040 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.187069 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.187088 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:03Z","lastTransitionTime":"2026-01-27T16:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.290953 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.291027 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.291049 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.291084 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.291109 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:03Z","lastTransitionTime":"2026-01-27T16:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.395778 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.395849 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.395868 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.395894 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.395921 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:03Z","lastTransitionTime":"2026-01-27T16:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.499793 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.499862 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.499887 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.499920 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.499943 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:03Z","lastTransitionTime":"2026-01-27T16:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.604839 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.604906 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.604919 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.604941 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.604954 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:03Z","lastTransitionTime":"2026-01-27T16:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.635582 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 13:36:48.910287383 +0000 UTC Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.646276 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.646336 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.646364 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:03 crc kubenswrapper[5049]: E0127 16:58:03.647128 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:03 crc kubenswrapper[5049]: E0127 16:58:03.647276 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:03 crc kubenswrapper[5049]: E0127 16:58:03.647379 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.647627 5049 scope.go:117] "RemoveContainer" containerID="48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.708568 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.708994 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.709012 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.709035 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.709054 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:03Z","lastTransitionTime":"2026-01-27T16:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.811935 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.811991 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.812010 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.812035 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.812052 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:03Z","lastTransitionTime":"2026-01-27T16:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.914846 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.914886 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.914902 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.914922 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:03 crc kubenswrapper[5049]: I0127 16:58:03.914933 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:03Z","lastTransitionTime":"2026-01-27T16:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.018411 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.018525 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.018548 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.018583 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.018606 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:04Z","lastTransitionTime":"2026-01-27T16:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.060781 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/1.log" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.065465 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerStarted","Data":"f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245"} Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.066440 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.087915 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.109530 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.121883 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.121970 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.121997 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.122041 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.122077 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:04Z","lastTransitionTime":"2026-01-27T16:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.132404 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.150246 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.167826 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.195242 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7
361322a32b70ea910a46b245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 16:57:40.507246 6535 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 16:57:40.507603 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:58:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.208959 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.225031 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.225096 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.225114 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.225140 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.225161 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:04Z","lastTransitionTime":"2026-01-27T16:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.226203 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.244892 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.264288 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.283365 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.297626 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.315788 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.327937 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.328250 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.328404 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.328579 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.328849 5049 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:04Z","lastTransitionTime":"2026-01-27T16:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.343273 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.356995 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.381586 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.397537 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:04Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.432334 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.432648 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.432779 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.432877 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.432962 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:04Z","lastTransitionTime":"2026-01-27T16:58:04Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.545204 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.545457 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.545535 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.545606 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.545749 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:04Z","lastTransitionTime":"2026-01-27T16:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.636083 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:25:20.264803729 +0000 UTC Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.645461 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:04 crc kubenswrapper[5049]: E0127 16:58:04.645651 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.648212 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.648244 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.648255 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.648271 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.648285 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:04Z","lastTransitionTime":"2026-01-27T16:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.751224 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.751570 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.751659 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.751805 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.751920 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:04Z","lastTransitionTime":"2026-01-27T16:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.855255 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.855335 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.855360 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.855392 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.855416 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:04Z","lastTransitionTime":"2026-01-27T16:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.959267 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.959328 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.959349 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.959373 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:04 crc kubenswrapper[5049]: I0127 16:58:04.959392 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:04Z","lastTransitionTime":"2026-01-27T16:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.063012 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.063067 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.063083 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.063104 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.063120 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:05Z","lastTransitionTime":"2026-01-27T16:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.072137 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/2.log" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.073187 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/1.log" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.078261 5049 generic.go:334] "Generic (PLEG): container finished" podID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerID="f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245" exitCode=1 Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.078336 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerDied","Data":"f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245"} Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.078400 5049 scope.go:117] "RemoveContainer" containerID="48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.079547 5049 scope.go:117] "RemoveContainer" containerID="f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245" Jan 27 16:58:05 crc kubenswrapper[5049]: E0127 16:58:05.079843 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.095320 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.114005 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.129323 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.149164 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.165406 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.165451 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.165464 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.165483 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.165494 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:05Z","lastTransitionTime":"2026-01-27T16:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.181364 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 16:57:40.507246 6535 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 16:57:40.507603 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:04Z\\\",\\\"message\\\":\\\"oing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-lv4sx]\\\\nI0127 16:58:04.892353 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0127 16:58:04.892360 6813 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:58:04.892378 6813 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:58:04.892381 6813 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-lv4sx before timer (time: 2026-01-27 16:58:05.892444043 +0000 UTC m=+1.639671013): skip\\\\nI0127 16:58:04.892401 6813 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 64.181µs)\\\\nI0127 16:58:04.892428 6813 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:58:04.892445 6813 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:58:04.892463 6813 factory.go:656] Stopping watch factory\\\\nI0127 16:58:04.892479 6813 ovnkube.go:599] Stopped ovnkube\\\\nI0127 16:58:04.892517 6813 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:58:04.892544 6813 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 16:58:04.892633 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:58:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.197164 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.211631 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.230531 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.247253 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.263102 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.268227 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.268282 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.268301 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.268328 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.268346 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:05Z","lastTransitionTime":"2026-01-27T16:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.276400 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.293259 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.308338 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.326113 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.343347 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.362640 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.371390 5049 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.371434 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.371447 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.371466 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.371481 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:05Z","lastTransitionTime":"2026-01-27T16:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.384808 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 
16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.475617 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.475713 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.475737 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 
16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.475762 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.475781 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:05Z","lastTransitionTime":"2026-01-27T16:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.578777 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.578967 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.578983 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.579003 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.579044 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:05Z","lastTransitionTime":"2026-01-27T16:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.636259 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:50:22.660502333 +0000 UTC Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.645831 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.645858 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.645927 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:05 crc kubenswrapper[5049]: E0127 16:58:05.646007 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:05 crc kubenswrapper[5049]: E0127 16:58:05.646133 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:05 crc kubenswrapper[5049]: E0127 16:58:05.646200 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.659276 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.677027 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724b
a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.681776 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.681807 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.681819 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.681837 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.681848 5049 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:05Z","lastTransitionTime":"2026-01-27T16:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.692093 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.710440 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.727928 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.745588 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.778190 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7
361322a32b70ea910a46b245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://48b7cf8e79773edb1a2abe93405b4d1cbe9952abcebe76f9b87c5d1e820ba8df\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"message\\\":\\\"kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0127 16:57:40.507246 6535 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-authentication-operator/metrics]} name:Service_openshift-authentication-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.150:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {6ea1fd71-2b40-4361-92ee-3f1ab4ec7414}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 16:57:40.507603 6535 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:04Z\\\",\\\"message\\\":\\\"oing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-lv4sx]\\\\nI0127 16:58:04.892353 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0127 16:58:04.892360 6813 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:58:04.892378 6813 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:58:04.892381 6813 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-lv4sx before timer (time: 2026-01-27 16:58:05.892444043 +0000 UTC m=+1.639671013): skip\\\\nI0127 16:58:04.892401 6813 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 64.181µs)\\\\nI0127 16:58:04.892428 6813 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:58:04.892445 6813 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:58:04.892463 6813 factory.go:656] Stopping watch factory\\\\nI0127 16:58:04.892479 6813 ovnkube.go:599] Stopped ovnkube\\\\nI0127 16:58:04.892517 6813 handler.go:208] Removed *v1.Node event handler 
2\\\\nI0127 16:58:04.892544 6813 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 16:58:04.892633 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:58:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\
\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.784235 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.784444 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.784574 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.784802 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.784967 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:05Z","lastTransitionTime":"2026-01-27T16:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.794712 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.811020 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.828019 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.841111 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.858550 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.872784 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 
16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.888648 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.889776 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.889911 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.890015 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.890126 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.890248 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:05Z","lastTransitionTime":"2026-01-27T16:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.904988 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.923144 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.945373 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:05Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.993518 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.993631 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.993653 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.993711 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:05 crc kubenswrapper[5049]: I0127 16:58:05.993736 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:05Z","lastTransitionTime":"2026-01-27T16:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.084588 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/2.log" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.089718 5049 scope.go:117] "RemoveContainer" containerID="f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245" Jan 27 16:58:06 crc kubenswrapper[5049]: E0127 16:58:06.089920 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.095762 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.095823 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.095837 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.095852 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.095864 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:06Z","lastTransitionTime":"2026-01-27T16:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.109024 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.130273 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.145393 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.172105 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7
361322a32b70ea910a46b245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:04Z\\\",\\\"message\\\":\\\"oing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-lv4sx]\\\\nI0127 16:58:04.892353 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0127 16:58:04.892360 6813 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:58:04.892378 6813 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:58:04.892381 6813 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-lv4sx before timer (time: 2026-01-27 16:58:05.892444043 +0000 UTC m=+1.639671013): skip\\\\nI0127 16:58:04.892401 6813 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 64.181µs)\\\\nI0127 16:58:04.892428 6813 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:58:04.892445 6813 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:58:04.892463 6813 factory.go:656] Stopping watch factory\\\\nI0127 16:58:04.892479 6813 ovnkube.go:599] Stopped ovnkube\\\\nI0127 16:58:04.892517 6813 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:58:04.892544 6813 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 16:58:04.892633 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.187925 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.198712 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.198762 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.198778 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.198800 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.198818 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:06Z","lastTransitionTime":"2026-01-27T16:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.200869 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.214991 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.232921 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.253478 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5
f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.267915 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var
/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.282825 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1
c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.299012 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.301754 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.301827 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.301844 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.301881 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.301902 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:06Z","lastTransitionTime":"2026-01-27T16:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.317932 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc 
kubenswrapper[5049]: I0127 16:58:06.334511 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.350918 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.368447 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.383637 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:06Z is after 2025-08-24T17:21:41Z"
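[Editor's note] Every status-patch failure in this stretch ends in the same root cause: the network-node-identity webhook at 127.0.0.1:9743 presents a serving certificate whose NotAfter (2025-08-24T17:21:41Z, per the log) is months behind the node's clock (2026-01-27). A minimal sketch of how one might confirm this from the node, assuming Python 3 with the third-party cryptography package; the endpoint is the one named in the log, and the script is illustrative rather than part of the log:

import ssl
from datetime import datetime
from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the log entries above

# get_server_certificate() does not verify the chain, so it still returns
# the PEM even though the certificate is already expired.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

now = datetime.utcnow()
print("not before:", cert.not_valid_before)
print("not after: ", cert.not_valid_after)   # log implies 2025-08-24T17:21:41Z
print("expired:   ", now > cert.not_valid_after)

With the clock at 2026-01-27 this would print expired: True, matching the webhook failures above.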
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.405211 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.405264 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.405279 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.405301 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.405315 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:06Z","lastTransitionTime":"2026-01-27T16:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.509933 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.510020 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.510062 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.510175 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.510202 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:06Z","lastTransitionTime":"2026-01-27T16:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.614825 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.614927 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.615000 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.615030 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.615048 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:06Z","lastTransitionTime":"2026-01-27T16:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.637168 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 19:34:54.237022373 +0000 UTC
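[Editor's note] The certificate_manager entry above recurs roughly once per second with a different rotation deadline each time, all of them already behind the node's clock. That pattern is consistent with client-go's certificate manager re-drawing a random deadline inside the certificate's lifetime on each sync. The sketch below assumes the commonly cited 70%-90% jitter band and a one-year lifetime inferred from the printed expiration; both are assumptions, not facts from the log:

import random
from datetime import datetime, timedelta

not_after = datetime(2026, 2, 24, 5, 53, 3)    # expiration printed in the log
not_before = not_after - timedelta(days=365)   # assumed one-year lifetime (illustrative)

# Assumed heuristic: rotate at a uniform random point in the 70%-90%
# band of the lifetime; exact constants may differ by version.
lifetime = not_after - not_before
deadline = not_before + lifetime * random.uniform(0.7, 0.9)
print("rotation deadline:", deadline)

Under these assumptions the deadline lands between early November 2025 and mid-January 2026, which brackets the deadlines the kubelet prints (2025-11-12, 2025-12-07, 2025-12-09, 2026-01-08).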
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.718719 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.718800 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.718815 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.718834 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.718860 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:06Z","lastTransitionTime":"2026-01-27T16:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.822180 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.822243 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.822265 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.822294 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.822317 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:06Z","lastTransitionTime":"2026-01-27T16:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.925787 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.925864 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.925887 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.925918 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:06 crc kubenswrapper[5049]: I0127 16:58:06.925939 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:06Z","lastTransitionTime":"2026-01-27T16:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.029570 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.029663 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.029740 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.029771 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.029794 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:07Z","lastTransitionTime":"2026-01-27T16:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.133492 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.133574 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.133591 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.133622 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.133641 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:07Z","lastTransitionTime":"2026-01-27T16:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.237202 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.237269 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.237286 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.237312 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.237330 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:07Z","lastTransitionTime":"2026-01-27T16:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.341276 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.341331 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.341347 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.341372 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.341389 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:07Z","lastTransitionTime":"2026-01-27T16:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.444976 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.445100 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.445118 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.445148 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.445172 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:07Z","lastTransitionTime":"2026-01-27T16:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.548750 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.548874 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.548898 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.548931 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.548951 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:07Z","lastTransitionTime":"2026-01-27T16:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.638033 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:33:22.036158397 +0000 UTC Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.645504 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:07 crc kubenswrapper[5049]: E0127 16:58:07.645738 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.646101 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.646236 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:07 crc kubenswrapper[5049]: E0127 16:58:07.646347 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:07 crc kubenswrapper[5049]: E0127 16:58:07.646429 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.655857 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.655905 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.655923 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.655947 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.655965 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:07Z","lastTransitionTime":"2026-01-27T16:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.759471 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.759527 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.759539 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.759559 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.759570 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:07Z","lastTransitionTime":"2026-01-27T16:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.862498 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.862531 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.862540 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.862556 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.862565 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:07Z","lastTransitionTime":"2026-01-27T16:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.965984 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.966071 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.966097 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.966132 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:07 crc kubenswrapper[5049]: I0127 16:58:07.966156 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:07Z","lastTransitionTime":"2026-01-27T16:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.069727 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.070071 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.070094 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.070132 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.070145 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:08Z","lastTransitionTime":"2026-01-27T16:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.173578 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.173650 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.173662 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.173755 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.173778 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:08Z","lastTransitionTime":"2026-01-27T16:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.276312 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.276449 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.276520 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.276560 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.276585 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:08Z","lastTransitionTime":"2026-01-27T16:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.380335 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.380392 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.380446 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.380477 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.380505 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:08Z","lastTransitionTime":"2026-01-27T16:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.484787 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.484851 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.484865 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.484954 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.484974 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:08Z","lastTransitionTime":"2026-01-27T16:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.588095 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.588266 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.588294 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.588384 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.588405 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:08Z","lastTransitionTime":"2026-01-27T16:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.638200 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:11:13.898194755 +0000 UTC Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.645558 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:08 crc kubenswrapper[5049]: E0127 16:58:08.645717 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.691214 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.691281 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.691339 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.691367 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.691384 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:08Z","lastTransitionTime":"2026-01-27T16:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.795239 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.795450 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.795471 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.795501 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.795520 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:08Z","lastTransitionTime":"2026-01-27T16:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.898548 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.898648 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.898732 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.898785 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:08 crc kubenswrapper[5049]: I0127 16:58:08.898830 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:08Z","lastTransitionTime":"2026-01-27T16:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.001917 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.001984 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.002002 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.002026 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.002044 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:09Z","lastTransitionTime":"2026-01-27T16:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.105113 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.105184 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.105205 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.105234 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.105255 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:09Z","lastTransitionTime":"2026-01-27T16:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.208870 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.208925 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.208943 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.208969 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.208987 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:09Z","lastTransitionTime":"2026-01-27T16:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.311795 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.311871 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.311895 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.311926 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.311950 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:09Z","lastTransitionTime":"2026-01-27T16:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.414973 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.415026 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.415042 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.415069 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.415087 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:09Z","lastTransitionTime":"2026-01-27T16:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.518485 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.518537 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.518547 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.518569 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.518592 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:09Z","lastTransitionTime":"2026-01-27T16:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.621952 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.622017 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.622034 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.622061 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.622083 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:09Z","lastTransitionTime":"2026-01-27T16:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.639258 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:57:50.946276908 +0000 UTC Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.645721 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:09 crc kubenswrapper[5049]: E0127 16:58:09.645953 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.646019 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:09 crc kubenswrapper[5049]: E0127 16:58:09.646131 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.645990 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:09 crc kubenswrapper[5049]: E0127 16:58:09.646254 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.725637 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.725757 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.725778 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.725817 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.725835 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:09Z","lastTransitionTime":"2026-01-27T16:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.829326 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.829398 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.829411 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.829438 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.829454 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:09Z","lastTransitionTime":"2026-01-27T16:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.932622 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.932726 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.932747 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.932776 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:09 crc kubenswrapper[5049]: I0127 16:58:09.932797 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:09Z","lastTransitionTime":"2026-01-27T16:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.036177 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.036236 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.036248 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.036269 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.036284 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:10Z","lastTransitionTime":"2026-01-27T16:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.139504 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.139563 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.139606 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.139626 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.139642 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:10Z","lastTransitionTime":"2026-01-27T16:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.242629 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.242742 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.242765 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.242795 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.242814 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:10Z","lastTransitionTime":"2026-01-27T16:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.345356 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.345411 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.345420 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.345444 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.345458 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:10Z","lastTransitionTime":"2026-01-27T16:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.448362 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.448401 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.448413 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.448430 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.448443 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:10Z","lastTransitionTime":"2026-01-27T16:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.552051 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.552103 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.552115 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.552140 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.552153 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:10Z","lastTransitionTime":"2026-01-27T16:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.639830 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 00:12:50.51153256 +0000 UTC Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.645159 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:10 crc kubenswrapper[5049]: E0127 16:58:10.645321 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.655563 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.655604 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.655614 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.655633 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.655646 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:10Z","lastTransitionTime":"2026-01-27T16:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.757496 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.757551 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.757563 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.757587 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.757601 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:10Z","lastTransitionTime":"2026-01-27T16:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.860610 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.860659 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.860687 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.860705 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.860717 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:10Z","lastTransitionTime":"2026-01-27T16:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.962942 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.963006 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.963023 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.963047 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:10 crc kubenswrapper[5049]: I0127 16:58:10.963062 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:10Z","lastTransitionTime":"2026-01-27T16:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.066305 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.066391 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.066413 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.066443 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.066461 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:11Z","lastTransitionTime":"2026-01-27T16:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.169243 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.169320 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.169347 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.169379 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.169403 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:11Z","lastTransitionTime":"2026-01-27T16:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.273149 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.273224 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.273249 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.273285 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.273309 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:11Z","lastTransitionTime":"2026-01-27T16:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.377065 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.377125 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.377138 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.377157 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.377168 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:11Z","lastTransitionTime":"2026-01-27T16:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.480779 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.480860 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.480877 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.480908 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.480929 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:11Z","lastTransitionTime":"2026-01-27T16:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.583403 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.583450 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.583462 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.583482 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.583495 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:11Z","lastTransitionTime":"2026-01-27T16:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.640640 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 06:10:48.043462415 +0000 UTC Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.645122 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.645208 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.645122 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:11 crc kubenswrapper[5049]: E0127 16:58:11.645359 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:11 crc kubenswrapper[5049]: E0127 16:58:11.645513 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:11 crc kubenswrapper[5049]: E0127 16:58:11.645776 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.686364 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.686425 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.686447 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.686473 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.686495 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:11Z","lastTransitionTime":"2026-01-27T16:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.789518 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.789590 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.789614 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.789650 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.789714 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:11Z","lastTransitionTime":"2026-01-27T16:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.893221 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.893271 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.893283 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.893304 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.893320 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:11Z","lastTransitionTime":"2026-01-27T16:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.996361 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.996424 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.996439 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.996460 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:11 crc kubenswrapper[5049]: I0127 16:58:11.996474 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:11Z","lastTransitionTime":"2026-01-27T16:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.099443 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.099482 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.099494 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.099511 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.099523 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:12Z","lastTransitionTime":"2026-01-27T16:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.202087 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.202134 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.202145 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.202166 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.202180 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:12Z","lastTransitionTime":"2026-01-27T16:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.305344 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.305398 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.305408 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.305425 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.305434 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:12Z","lastTransitionTime":"2026-01-27T16:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.407993 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.408052 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.408061 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.408078 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.408089 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:12Z","lastTransitionTime":"2026-01-27T16:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.511277 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.511333 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.511352 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.511377 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.511395 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:12Z","lastTransitionTime":"2026-01-27T16:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.614520 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.614568 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.614578 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.614595 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.614603 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:12Z","lastTransitionTime":"2026-01-27T16:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.641221 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:04:49.697536276 +0000 UTC Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.646945 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:12 crc kubenswrapper[5049]: E0127 16:58:12.647450 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.717094 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.717122 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.717170 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.717184 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.717195 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:12Z","lastTransitionTime":"2026-01-27T16:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.820319 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.820386 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.820406 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.820432 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.820451 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:12Z","lastTransitionTime":"2026-01-27T16:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.904714 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.904759 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.904768 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.904786 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.904797 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:12Z","lastTransitionTime":"2026-01-27T16:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:12 crc kubenswrapper[5049]: E0127 16:58:12.922175 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:12Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.926829 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.927000 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.927088 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.927206 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.927303 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:12Z","lastTransitionTime":"2026-01-27T16:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:12 crc kubenswrapper[5049]: E0127 16:58:12.960281 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:12Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.965772 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.965821 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.965841 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.965867 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:12 crc kubenswrapper[5049]: I0127 16:58:12.965884 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:12Z","lastTransitionTime":"2026-01-27T16:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:12 crc kubenswrapper[5049]: E0127 16:58:12.992864 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:12Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.000196 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.000232 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.000243 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.000260 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.000270 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:13Z","lastTransitionTime":"2026-01-27T16:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:13 crc kubenswrapper[5049]: E0127 16:58:13.018726 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:13Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.022456 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.022485 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.022497 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.022514 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.022526 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:13Z","lastTransitionTime":"2026-01-27T16:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:13 crc kubenswrapper[5049]: E0127 16:58:13.038371 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:13Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:13 crc kubenswrapper[5049]: E0127 16:58:13.038501 5049 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.040123 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
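Every status-patch attempt in this window fails for the same reason: the kubelet's PATCH is rejected because the admission webhook node.network-node-identity.openshift.io presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-27. After a fixed number of retries per sync (the kubelet's nodeStatusUpdateRetry constant, 5) it gives up with "update node status exceeds retry count". A minimal sketch for confirming this from a shell on the node, assuming openssl is installed and using the 127.0.0.1:9743 endpoint quoted in the error:

    # Print notBefore/notAfter for the certificate served on the webhook port
    openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null | openssl x509 -noout -dates

    # Count the failed patch attempts in this window (the kubelet runs as the "kubelet" unit here)
    journalctl -u kubelet --since "2026-01-27 16:58:12" --until "2026-01-27 16:58:14" | grep -c "Error updating node status"

For a CRC instance that has been powered off past its certificate lifetimes, certificates may renew on a fresh start; if they do not, recreating the cluster (crc delete, then crc start) is the blunt fallback.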
event="NodeHasSufficientMemory" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.040143 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.040151 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.040164 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.040175 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:13Z","lastTransitionTime":"2026-01-27T16:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.143013 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.143475 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.143610 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.143775 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.143909 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:13Z","lastTransitionTime":"2026-01-27T16:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.247102 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.247175 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.247198 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.247230 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.247247 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:13Z","lastTransitionTime":"2026-01-27T16:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.349904 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.350463 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.350665 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.350891 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.351138 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:13Z","lastTransitionTime":"2026-01-27T16:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.453830 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.453872 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.453881 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.453897 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.453907 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:13Z","lastTransitionTime":"2026-01-27T16:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.556934 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.556971 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.556980 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.556994 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.557003 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:13Z","lastTransitionTime":"2026-01-27T16:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
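Independently of the webhook failure, the node stays NotReady because nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet; on this cluster that file comes from the OVN-Kubernetes rollout (the network-node-identity webhook in these entries is part of it), which is itself stalled by the same expired certificates. A sketch of the two checks, assuming shell access on the node and, for the oc commands, a reachable API with a valid kubeconfig (not a given while the cluster is in this state):

    # The kubelet watches this directory; if it is empty, NetworkReady stays false
    ls -l /etc/kubernetes/cni/net.d/

    # From a working client: is the network operator progressing, are the OVN pods up?
    oc get co network
    oc -n openshift-ovn-kubernetes get pods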
Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.617263 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs\") pod \"network-metrics-daemon-lv4sx\" (UID: \"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\") " pod="openshift-multus/network-metrics-daemon-lv4sx"
Jan 27 16:58:13 crc kubenswrapper[5049]: E0127 16:58:13.617479 5049 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 16:58:13 crc kubenswrapper[5049]: E0127 16:58:13.617549 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs podName:d48a67e1-cecf-41d6-a42c-52bdcd3ab892 nodeName:}" failed. No retries permitted until 2026-01-27 16:58:45.6175337 +0000 UTC m=+100.716507239 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs") pod "network-metrics-daemon-lv4sx" (UID: "d48a67e1-cecf-41d6-a42c-52bdcd3ab892") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.641768 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:46:13.083461321 +0000 UTC
Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.645144 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 16:58:13 crc kubenswrapper[5049]: E0127 16:58:13.645410 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.645826 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.645828 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx"
Jan 27 16:58:13 crc kubenswrapper[5049]: E0127 16:58:13.646246 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892"
Jan 27 16:58:13 crc kubenswrapper[5049]: E0127 16:58:13.646308 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
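The nestedpendingoperations entry above pushes the next mount attempt out by 32s ("durationBeforeRetry 32s", no retries until 16:58:45). That is consistent with a per-operation exponential backoff that doubles on each failure up to a cap; a generic sketch of the pattern, where the 500 ms base and ~2 m ceiling are assumptions rather than values taken from this log:

```python
from datetime import timedelta

BASE = timedelta(milliseconds=500)      # assumed initial delay
CAP = timedelta(minutes=2, seconds=2)   # assumed ceiling

def next_retry_delay(consecutive_failures: int) -> timedelta:
    """Double the wait after every consecutive failure, up to the cap."""
    return min(BASE * (2 ** consecutive_failures), CAP)

# Under these assumptions the 7th attempt waits 0.5s * 2**6 = 32s,
# matching "durationBeforeRetry 32s" in the entry above.
for n in range(8):
    print(n, next_retry_delay(n))
```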
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.661195 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.662443 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.662733 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.662944 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.663218 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.663430 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:13Z","lastTransitionTime":"2026-01-27T16:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.767940 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.768076 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.768155 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.768188 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.768248 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:13Z","lastTransitionTime":"2026-01-27T16:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.871485 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.871516 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.871527 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.871543 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.871555 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:13Z","lastTransitionTime":"2026-01-27T16:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.973658 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.973721 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.973739 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.973785 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:13 crc kubenswrapper[5049]: I0127 16:58:13.973798 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:13Z","lastTransitionTime":"2026-01-27T16:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.076145 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.076211 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.076229 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.076253 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.076270 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:14Z","lastTransitionTime":"2026-01-27T16:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.179599 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.179653 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.179695 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.179720 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.179736 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:14Z","lastTransitionTime":"2026-01-27T16:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.282412 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.282460 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.282471 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.282488 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.282504 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:14Z","lastTransitionTime":"2026-01-27T16:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.386014 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.386094 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.386119 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.386203 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.386232 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:14Z","lastTransitionTime":"2026-01-27T16:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
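The certificate_manager entries in this capture (16:58:13.641 above and 16:58:14.642 below) report the same expiration but different rotation deadlines, i.e. the deadline is re-randomized each time it is computed, and both computed deadlines already lie in the past, so rotation is due immediately. A sketch of that jitter under the assumption that the manager picks a point in roughly the 70-90% band of the certificate lifetime (the notBefore date below is invented for illustration; only the expiration comes from the log):

```python
import random
from datetime import datetime, timedelta

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    """Pick a jittered deadline somewhere in ~[70%, 90%] of the lifetime."""
    lifetime = not_after - not_before
    return not_before + lifetime * random.uniform(0.7, 0.9)

not_after = datetime(2026, 2, 24, 5, 53, 3)    # expiration from the log
not_before = not_after - timedelta(days=365)   # assumed issue date
for _ in range(3):
    # A different deadline on each call, like the two log entries here.
    print(rotation_deadline(not_before, not_after))
```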
Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.642555 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 19:55:04.84861813 +0000 UTC
Jan 27 16:58:14 crc kubenswrapper[5049]: I0127 16:58:14.644957 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 16:58:14 crc kubenswrapper[5049]: E0127 16:58:14.645126 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[the node-status block repeats at 16:58:14.697, 16:58:14.799, 16:58:14.902, 16:58:15.005 and 16:58:15.108]
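Every status_manager.go patch that follows is rejected by the pod.network-node-identity.openshift.io webhook with one and the same x509 error: its serving certificate expired on 2025-08-24T17:21:41Z, long before the node clock of 2026-01-27T16:58:15Z. The TLS layer's check is a plain validity-window comparison; a sketch that reproduces the logged message from the two dates (the notBefore value is an arbitrary assumption, since the log does not show it):

```python
from datetime import datetime, timedelta, timezone

def check_validity(now: datetime, not_before: datetime, not_after: datetime) -> None:
    """Validity-window check mirroring the x509 error text in the log."""
    stamp = "%Y-%m-%dT%H:%M:%SZ"
    if now > not_after:
        raise ValueError("x509: certificate has expired or is not yet valid: "
                         f"current time {now:{stamp}} is after {not_after:{stamp}}")
    if now < not_before:
        raise ValueError("x509: certificate has expired or is not yet valid: "
                         f"current time {now:{stamp}} is before {not_before:{stamp}}")

now = datetime(2026, 1, 27, 16, 58, 15, tzinfo=timezone.utc)        # node clock
not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # from the log
not_before = not_after - timedelta(days=365)                        # assumed
check_validity(now, not_before, not_after)  # raises the message seen below
```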
Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.121707 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hc4th_7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b/kube-multus/0.log"
Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.121760 5049 generic.go:334] "Generic (PLEG): container finished" podID="7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b" containerID="b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd" exitCode=1
Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.121793 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hc4th" event={"ID":"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b","Type":"ContainerDied","Data":"b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd"}
Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.122241 5049 scope.go:117] "RemoveContainer" containerID="b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd"
Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.138339 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has
expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.159620 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.175102 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.196326 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7
361322a32b70ea910a46b245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:04Z\\\",\\\"message\\\":\\\"oing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-lv4sx]\\\\nI0127 16:58:04.892353 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0127 16:58:04.892360 6813 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:58:04.892378 6813 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:58:04.892381 6813 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-lv4sx before timer (time: 2026-01-27 16:58:05.892444043 +0000 UTC m=+1.639671013): skip\\\\nI0127 16:58:04.892401 6813 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 64.181µs)\\\\nI0127 16:58:04.892428 6813 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:58:04.892445 6813 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:58:04.892463 6813 factory.go:656] Stopping watch factory\\\\nI0127 16:58:04.892479 6813 ovnkube.go:599] Stopped ovnkube\\\\nI0127 16:58:04.892517 6813 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:58:04.892544 6813 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 16:58:04.892633 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.211347 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.212046 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.212062 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.212070 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.212084 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.212093 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:15Z","lastTransitionTime":"2026-01-27T16:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.226502 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.242097 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.255182 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad1db96-84e1-4083-8023-4d9fdc72dc54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29c84da654b6b287bd96bdd26e4c0ce623a1f76d3f8e043be531ec0fdceec7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.269196 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.285469 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:14Z\\\",\\\"message\\\":\\\"2026-01-27T16:57:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912\\\\n2026-01-27T16:57:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912 to /host/opt/cni/bin/\\\\n2026-01-27T16:57:29Z [verbose] multus-daemon started\\\\n2026-01-27T16:57:29Z [verbose] Readiness Indicator file check\\\\n2026-01-27T16:58:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.297650 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 
16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.310215 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.314771 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.314800 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.314810 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.314829 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.314843 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:15Z","lastTransitionTime":"2026-01-27T16:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.323833 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.342090 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.356864 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.367870 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.384567 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a34
9361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.398606 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.418003 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.418040 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.418050 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.418068 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.418077 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:15Z","lastTransitionTime":"2026-01-27T16:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.520804 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.520852 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.520867 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.520884 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.520893 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:15Z","lastTransitionTime":"2026-01-27T16:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.623531 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.623587 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.623599 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.623623 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.623636 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:15Z","lastTransitionTime":"2026-01-27T16:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.642916 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:05:21.396830541 +0000 UTC Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.645237 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:15 crc kubenswrapper[5049]: E0127 16:58:15.645394 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.645463 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:15 crc kubenswrapper[5049]: E0127 16:58:15.645609 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.645685 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:15 crc kubenswrapper[5049]: E0127 16:58:15.645742 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.664761 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7
361322a32b70ea910a46b245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:04Z\\\",\\\"message\\\":\\\"oing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-lv4sx]\\\\nI0127 16:58:04.892353 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0127 16:58:04.892360 6813 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:58:04.892378 6813 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:58:04.892381 6813 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-lv4sx before timer (time: 2026-01-27 16:58:05.892444043 +0000 UTC m=+1.639671013): skip\\\\nI0127 16:58:04.892401 6813 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 64.181µs)\\\\nI0127 16:58:04.892428 6813 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:58:04.892445 6813 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:58:04.892463 6813 factory.go:656] Stopping watch factory\\\\nI0127 16:58:04.892479 6813 ovnkube.go:599] Stopped ovnkube\\\\nI0127 16:58:04.892517 6813 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:58:04.892544 6813 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 16:58:04.892633 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.677378 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.689188 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.704288 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.715241 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad1db96-84e1-4083-8023-4d9fdc72dc54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29c84da654b6b287bd96bdd26e4c0ce623a1f76d3f8e043be531ec0fdceec7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.726588 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.726638 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.726652 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.726687 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.726702 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:15Z","lastTransitionTime":"2026-01-27T16:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.734800 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.748860 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.763782 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.779629 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.792641 5049 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.810959 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.824255 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:14Z\\\",\\\"message\\\":\\\"2026-01-27T16:57:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912\\\\n2026-01-27T16:57:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912 to /host/opt/cni/bin/\\\\n2026-01-27T16:57:29Z [verbose] multus-daemon started\\\\n2026-01-27T16:57:29Z [verbose] Readiness Indicator file check\\\\n2026-01-27T16:58:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.828712 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.828741 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.828750 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.828768 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.828780 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:15Z","lastTransitionTime":"2026-01-27T16:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.836243 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.850841 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.865735 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.879228 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.891889 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.901574 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:15Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.931610 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.931718 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.931739 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.931765 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:15 crc kubenswrapper[5049]: I0127 16:58:15.931783 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:15Z","lastTransitionTime":"2026-01-27T16:58:15Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.034432 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.034504 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.034516 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.034538 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.034553 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:16Z","lastTransitionTime":"2026-01-27T16:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.129319 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hc4th_7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b/kube-multus/0.log" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.129395 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hc4th" event={"ID":"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b","Type":"ContainerStarted","Data":"836b443e3565d68c8d2b62b22874ce3ba84e9c4088924b18c8aafffd4ff804f0"} Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.137418 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.137471 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.137486 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.137512 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.137524 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:16Z","lastTransitionTime":"2026-01-27T16:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.143651 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.156198 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.171542 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.184151 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad1db96-84e1-4083-8023-4d9fdc72dc54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29c84da654b6b287bd96bdd26e4c0ce623a1f76d3f8e043be531ec0fdceec7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.196448 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.208839 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.221329 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.240551 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.240608 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.240620 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.240639 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.240651 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:16Z","lastTransitionTime":"2026-01-27T16:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.243568 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:04Z\\\",\\\"message\\\":\\\"oing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-lv4sx]\\\\nI0127 16:58:04.892353 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0127 16:58:04.892360 6813 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:58:04.892378 6813 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:58:04.892381 6813 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-lv4sx before timer (time: 2026-01-27 16:58:05.892444043 +0000 UTC m=+1.639671013): skip\\\\nI0127 16:58:04.892401 6813 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 64.181µs)\\\\nI0127 16:58:04.892428 6813 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:58:04.892445 6813 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:58:04.892463 6813 factory.go:656] Stopping watch factory\\\\nI0127 16:58:04.892479 6813 ovnkube.go:599] Stopped ovnkube\\\\nI0127 16:58:04.892517 6813 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:58:04.892544 6813 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 16:58:04.892633 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.272608 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.288905 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 
2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.306069 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.318961 5049 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836b443e3565d68c8d2b62b22874ce3ba84e9c4088924b18c8aafffd4ff804f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:14Z\\\",\\\"message\\\":\\\"2026-01-27T16:57:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912\\\\n2026-01-27T16:57:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912 to /host/opt/cni/bin/\\\\n2026-01-27T16:57:29Z [verbose] multus-daemon started\\\\n2026-01-27T16:57:29Z [verbose] Readiness Indicator file check\\\\n2026-01-27T16:58:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:58:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.329088 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 
16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.341058 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.343740 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.343789 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.343801 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.343823 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.343835 5049 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:16Z","lastTransitionTime":"2026-01-27T16:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.353276 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.365579 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.376802 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.387085 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z"
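All four status-patch failures above fail the same way: the kubelet cannot call the pod.network-node-identity.openshift.io webhook on 127.0.0.1:9743 because that webhook's serving certificate expired on 2025-08-24, roughly five months before the node's current clock of 2026-01-27. A minimal sketch for pulling the expiry skew out of lines like these; it assumes the exact "current time X is after Y" wording seen above and reads the log from stdin:

```python
import re
import sys
from datetime import datetime

# Matches the x509 error text in the entries above, e.g.
# "current time 2026-01-27T16:58:16Z is after 2025-08-24T17:21:41Z"
X509 = re.compile(r"current time (\S+?)Z is after (\S+?)Z")

def expired_certs(lines):
    """Yield (now, not_after, days_expired) for each x509 expiry error."""
    for line in lines:
        m = X509.search(line)
        if m:
            now = datetime.fromisoformat(m.group(1))
            not_after = datetime.fromisoformat(m.group(2))
            yield now, not_after, (now - not_after).days

if __name__ == "__main__":
    for now, not_after, days in expired_certs(sys.stdin):
        print(f"cert expired {not_after} ({days} days before clock {now})")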
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.447010 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.447038 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.447046 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.447064 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.447075 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:16Z","lastTransitionTime":"2026-01-27T16:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.549774 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.549802 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.549811 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.549827 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.549836 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:16Z","lastTransitionTime":"2026-01-27T16:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.643579 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:40:18.827464652 +0000 UTC
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.644929 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 16:58:16 crc kubenswrapper[5049]: E0127 16:58:16.645067 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
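The NodeNotReady loop repeats because the kubelet holds the node's Ready condition at False until a CNI configuration appears in /etc/kubernetes/cni/net.d/ (on this cluster, ovn-kubernetes writes it once ovnkube-node is healthy). A quick check of that directory, as a sketch; the .conf/.conflist/.json extension set is the usual libcni convention and is an assumption here:

```python
from pathlib import Path

# Directory named in the NetworkPluginNotReady errors above.
CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

def cni_configs(conf_dir: Path = CNI_CONF_DIR):
    """Return CNI config files a runtime would consider, sorted by name."""
    if not conf_dir.is_dir():
        return []
    # .conf/.conflist/.json is the usual libcni search set (assumption).
    return sorted(p for p in conf_dir.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

if __name__ == "__main__":
    found = cni_configs()
    if found:
        for p in found:
            print("CNI config present:", p)
    else:
        print(f"no CNI configuration file in {CNI_CONF_DIR}/ -- "
              "matches the KubeletNotReady message above")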
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.651687 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.651712 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.651723 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.651736 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.651747 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:16Z","lastTransitionTime":"2026-01-27T16:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.753711 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.753800 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.753814 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.753841 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.753866 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:16Z","lastTransitionTime":"2026-01-27T16:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.856430 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.856473 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.856487 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.856507 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.856521 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:16Z","lastTransitionTime":"2026-01-27T16:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.959102 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.959166 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.959182 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.959209 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:16 crc kubenswrapper[5049]: I0127 16:58:16.959226 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:16Z","lastTransitionTime":"2026-01-27T16:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.062857 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.062913 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.062930 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.062958 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.062978 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:17Z","lastTransitionTime":"2026-01-27T16:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.165257 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.165295 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.165302 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.165317 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.165326 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:17Z","lastTransitionTime":"2026-01-27T16:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.268514 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.268555 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.268564 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.268580 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.268589 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:17Z","lastTransitionTime":"2026-01-27T16:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.371031 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.371069 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.371079 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.371094 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.371104 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:17Z","lastTransitionTime":"2026-01-27T16:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.473327 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.473405 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.473427 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.473460 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.473482 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:17Z","lastTransitionTime":"2026-01-27T16:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.576032 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.576138 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.576163 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.576193 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.576216 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:17Z","lastTransitionTime":"2026-01-27T16:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.644081 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 02:53:36.123258633 +0000 UTC
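Note how each certificate_manager.go:356 entry reports a different rotation deadline (2025-11-13 earlier, 2025-11-19 here, 2025-12-05 and 2026-01-18 further down): the kubelet's certificate manager recomputes a jittered deadline on every sync, and since every candidate deadline already lies in the past relative to the 2026-01-27 clock, rotation is perpetually due and the line repeats. A sketch of that jitter; the 70-90% window of the certificate lifetime and the one-year lifetime are assumptions about the upstream heuristic, not values taken from this log:

```python
import random
from datetime import datetime, timedelta

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    """Pick a jittered rotation point late in the cert's validity window.

    Mirrors the behaviour visible above, where each sync logs a different
    deadline: a random point in the tail of the lifetime (the exact
    70-90% fraction used here is an assumption).
    """
    lifetime = not_after - not_before
    fraction = random.uniform(0.7, 0.9)
    return not_before + timedelta(seconds=lifetime.total_seconds() * fraction)

# Expiration taken from the certificate_manager.go:356 entries above;
# a one-year lifetime is assumed for illustration.
not_after = datetime(2026, 2, 24, 5, 53, 3)
not_before = not_after - timedelta(days=365)
for _ in range(3):
    print("rotation deadline:", rotation_deadline(not_before, not_after))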
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.645415 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.645541 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 16:58:17 crc kubenswrapper[5049]: E0127 16:58:17.645583 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.645618 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx"
Jan 27 16:58:17 crc kubenswrapper[5049]: E0127 16:58:17.645752 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 16:58:17 crc kubenswrapper[5049]: E0127 16:58:17.645902 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.679101 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.679186 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.679211 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.679246 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.679276 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:17Z","lastTransitionTime":"2026-01-27T16:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
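From here to the end of the capture, the same five-line block (four "Recording event message for node" entries plus one "Node became not ready") recurs roughly every 100 ms. When triaging a flood like this it helps to collapse the log by (source, message); a sketch, assuming the klog header layout seen in these lines and input on stdin:

```python
import re
import sys
from collections import Counter

# klog header as seen above, e.g.
# I0127 16:58:17.645583 5049 pod_workers.go:1301] "Error syncing pod, skipping" ...
KLOG = re.compile(
    r'(?P<sev>[IWE])(?P<ts>\d{4} \d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+'
    r'(?P<src>[\w.]+:\d+)\]\s+"(?P<msg>[^"]*)"'
)

def summarize(lines):
    """Count identical (source, message) pairs to collapse repeated events."""
    counts = Counter()
    for line in lines:
        m = KLOG.search(line)
        if m:
            counts[(m.group("src"), m.group("msg"))] += 1
    return counts

if __name__ == "__main__":
    for (src, msg), n in summarize(sys.stdin).most_common(10):
        print(f"{n:5d}x {src} {msg}")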
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.783148 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.783211 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.783227 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.783252 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.783266 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:17Z","lastTransitionTime":"2026-01-27T16:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.886066 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.886107 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.886119 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.886138 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.886151 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:17Z","lastTransitionTime":"2026-01-27T16:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.989942 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.990028 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.990053 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.990089 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:17 crc kubenswrapper[5049]: I0127 16:58:17.990115 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:17Z","lastTransitionTime":"2026-01-27T16:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.094390 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.094468 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.094494 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.094528 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.094553 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:18Z","lastTransitionTime":"2026-01-27T16:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.198494 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.198579 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.198603 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.198639 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.198663 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:18Z","lastTransitionTime":"2026-01-27T16:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.303197 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.303250 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.303260 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.303281 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.303291 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:18Z","lastTransitionTime":"2026-01-27T16:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.406179 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.406222 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.406240 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.406263 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.406283 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:18Z","lastTransitionTime":"2026-01-27T16:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.510206 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.510267 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.510286 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.510312 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.510331 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:18Z","lastTransitionTime":"2026-01-27T16:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.612724 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.612801 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.612821 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.612851 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.612876 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:18Z","lastTransitionTime":"2026-01-27T16:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.644884 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:07:02.034477344 +0000 UTC Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.644939 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:18 crc kubenswrapper[5049]: E0127 16:58:18.645090 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.715496 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.715552 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.715562 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.715582 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.715593 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:18Z","lastTransitionTime":"2026-01-27T16:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.818808 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.818840 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.818852 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.818866 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.818878 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:18Z","lastTransitionTime":"2026-01-27T16:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.921849 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.921895 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.921913 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.921933 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:18 crc kubenswrapper[5049]: I0127 16:58:18.921945 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:18Z","lastTransitionTime":"2026-01-27T16:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.025376 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.025413 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.025425 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.025441 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.025453 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:19Z","lastTransitionTime":"2026-01-27T16:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.129066 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.129138 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.129165 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.129197 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.129220 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:19Z","lastTransitionTime":"2026-01-27T16:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.232518 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.232566 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.232575 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.232593 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.232604 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:19Z","lastTransitionTime":"2026-01-27T16:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.340829 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.340879 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.340888 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.340906 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.340917 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:19Z","lastTransitionTime":"2026-01-27T16:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.443596 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.443665 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.443746 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.443777 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.443794 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:19Z","lastTransitionTime":"2026-01-27T16:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.547304 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.547377 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.547396 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.547423 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.547441 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:19Z","lastTransitionTime":"2026-01-27T16:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.645480 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:02:29.717876041 +0000 UTC Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.645700 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.645768 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.645900 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:19 crc kubenswrapper[5049]: E0127 16:58:19.646002 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:19 crc kubenswrapper[5049]: E0127 16:58:19.646175 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:19 crc kubenswrapper[5049]: E0127 16:58:19.646260 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.649851 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.649889 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.649904 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.649926 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.649939 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:19Z","lastTransitionTime":"2026-01-27T16:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.752143 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.752189 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.752203 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.752224 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.752236 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:19Z","lastTransitionTime":"2026-01-27T16:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.855023 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.855073 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.855089 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.855107 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.855117 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:19Z","lastTransitionTime":"2026-01-27T16:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.958486 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.958550 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.958567 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.958597 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:19 crc kubenswrapper[5049]: I0127 16:58:19.958614 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:19Z","lastTransitionTime":"2026-01-27T16:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.062304 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.062376 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.062401 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.062433 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.062453 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:20Z","lastTransitionTime":"2026-01-27T16:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.167149 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.167228 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.167243 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.167272 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.167289 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:20Z","lastTransitionTime":"2026-01-27T16:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.270597 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.270636 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.270646 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.270665 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.270697 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:20Z","lastTransitionTime":"2026-01-27T16:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.373766 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.373821 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.373834 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.373894 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.373909 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:20Z","lastTransitionTime":"2026-01-27T16:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.478179 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.478317 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.478404 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.478491 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.478521 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:20Z","lastTransitionTime":"2026-01-27T16:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.581883 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.581949 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.581970 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.581997 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.582017 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:20Z","lastTransitionTime":"2026-01-27T16:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.645367 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.645820 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 04:08:21.496861551 +0000 UTC Jan 27 16:58:20 crc kubenswrapper[5049]: E0127 16:58:20.646032 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.646471 5049 scope.go:117] "RemoveContainer" containerID="f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245" Jan 27 16:58:20 crc kubenswrapper[5049]: E0127 16:58:20.646803 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.685470 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.685534 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.685553 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.685582 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.685600 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:20Z","lastTransitionTime":"2026-01-27T16:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.789668 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.789766 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.789790 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.789818 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.789836 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:20Z","lastTransitionTime":"2026-01-27T16:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.893258 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.893356 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.893373 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.893400 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.893418 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:20Z","lastTransitionTime":"2026-01-27T16:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.996304 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.996379 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.996396 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.996422 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:20 crc kubenswrapper[5049]: I0127 16:58:20.996439 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:20Z","lastTransitionTime":"2026-01-27T16:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.100153 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.100222 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.100239 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.100265 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.100282 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:21Z","lastTransitionTime":"2026-01-27T16:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.203173 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.203271 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.203296 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.203330 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.203354 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:21Z","lastTransitionTime":"2026-01-27T16:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.306265 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.306365 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.306383 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.306414 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.306432 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:21Z","lastTransitionTime":"2026-01-27T16:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.409406 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.409484 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.409506 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.409535 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.409559 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:21Z","lastTransitionTime":"2026-01-27T16:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.513518 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.513588 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.513610 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.513640 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.513660 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:21Z","lastTransitionTime":"2026-01-27T16:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.617484 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.617560 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.617578 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.617608 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.617643 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:21Z","lastTransitionTime":"2026-01-27T16:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.645927 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.645971 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.646023 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:19:12.834621207 +0000 UTC Jan 27 16:58:21 crc kubenswrapper[5049]: E0127 16:58:21.646093 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:21 crc kubenswrapper[5049]: E0127 16:58:21.646252 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.646306 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:21 crc kubenswrapper[5049]: E0127 16:58:21.646467 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.721014 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.721078 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.721096 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.721122 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.721141 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:21Z","lastTransitionTime":"2026-01-27T16:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.823557 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.823621 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.823638 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.823667 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.823708 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:21Z","lastTransitionTime":"2026-01-27T16:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.927600 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.927955 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.928010 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.928037 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:21 crc kubenswrapper[5049]: I0127 16:58:21.928367 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:21Z","lastTransitionTime":"2026-01-27T16:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.032646 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.032748 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.032776 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.032808 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.032833 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:22Z","lastTransitionTime":"2026-01-27T16:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.136785 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.136858 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.136881 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.136911 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.136933 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:22Z","lastTransitionTime":"2026-01-27T16:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.239867 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.239970 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.240002 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.240038 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.240057 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:22Z","lastTransitionTime":"2026-01-27T16:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.343439 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.343544 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.343568 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.343599 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.343621 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:22Z","lastTransitionTime":"2026-01-27T16:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.446965 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.447031 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.447048 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.447074 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.447092 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:22Z","lastTransitionTime":"2026-01-27T16:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.550778 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.550842 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.550859 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.550887 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.550904 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:22Z","lastTransitionTime":"2026-01-27T16:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.646207 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.646161 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 06:39:36.431236854 +0000 UTC Jan 27 16:58:22 crc kubenswrapper[5049]: E0127 16:58:22.646457 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.654800 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.654920 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.654939 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.654964 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.654981 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:22Z","lastTransitionTime":"2026-01-27T16:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.759342 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.759441 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.759464 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.759489 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.759506 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:22Z","lastTransitionTime":"2026-01-27T16:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.863580 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.863647 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.863666 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.863725 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.863745 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:22Z","lastTransitionTime":"2026-01-27T16:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.966933 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.966991 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.967008 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.967034 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:22 crc kubenswrapper[5049]: I0127 16:58:22.967054 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:22Z","lastTransitionTime":"2026-01-27T16:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.070468 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.070541 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.070559 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.070589 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.070606 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:23Z","lastTransitionTime":"2026-01-27T16:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.174067 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.174152 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.174194 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.174238 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.174264 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:23Z","lastTransitionTime":"2026-01-27T16:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.277660 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.277750 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.277766 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.277791 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.277809 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:23Z","lastTransitionTime":"2026-01-27T16:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.370583 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.370658 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.370725 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.370754 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.370775 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:23Z","lastTransitionTime":"2026-01-27T16:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:23 crc kubenswrapper[5049]: E0127 16:58:23.395012 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:23Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.401916 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.401974 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.401986 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.402013 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.402057 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:23Z","lastTransitionTime":"2026-01-27T16:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:23 crc kubenswrapper[5049]: E0127 16:58:23.422913 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:23Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.428231 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.428289 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.428306 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.428333 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.428351 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:23Z","lastTransitionTime":"2026-01-27T16:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:23 crc kubenswrapper[5049]: E0127 16:58:23.451085 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:23Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.456619 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.456688 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
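Note: every retry above fails at the same point. The node-identity webhook at 127.0.0.1:9743 presents a serving certificate whose notAfter (2025-08-24T17:21:41Z) is long past the node clock (2026-01-27), so the API server cannot complete the TLS handshake and rejects the status patch. One way to confirm this from the node is to attempt a verifying TLS handshake against that port; a minimal Python sketch, where only the host and port are taken from the log and everything else is illustrative:

```python
import socket
import ssl

def check_tls_endpoint(host: str, port: int) -> None:
    """Attempt a verifying TLS handshake and report certificate problems."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                # Handshake succeeded: show the leaf certificate's expiry.
                print("handshake ok, notAfter:", tls.getpeercert().get("notAfter"))
    except ssl.SSLCertVerificationError as err:
        # For the failure logged above this prints a reason such as
        # "certificate has expired".
        print("certificate verification failed:", err.verify_message)

# Host and port taken from the webhook URL in the kubelet error.
check_tls_endpoint("127.0.0.1", 9743)
```

Caveat: with the default trust store, a cluster-CA-signed certificate can also fail verification for issuer reasons; the useful signal is that verify_message distinguishes "certificate has expired" from other failures.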
event="NodeHasNoDiskPressure" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.456702 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.456727 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.456745 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:23Z","lastTransitionTime":"2026-01-27T16:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:23 crc kubenswrapper[5049]: E0127 16:58:23.473184 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:23Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.478867 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.478926 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.478938 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.478957 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.478971 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:23Z","lastTransitionTime":"2026-01-27T16:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:23 crc kubenswrapper[5049]: E0127 16:58:23.497192 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:23Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:23 crc kubenswrapper[5049]: E0127 16:58:23.497474 5049 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.499383 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
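The block above is one complete status-sync cycle: a bounded run of "Error updating node status, will retry" attempts, after which kubelet_node_status.go:572 gives up with "update node status exceeds retry count". A rough sketch of that control flow; the budget of 5 matches the kubelet's nodeStatusUpdateRetry constant, but treat the exact number as an assumption here:

```python
NODE_STATUS_UPDATE_RETRY = 5  # assumed retry budget per sync cycle

def sync_node_status(patch_status) -> None:
    """Sketch of the bounded retry implied by the log, not kubelet source."""
    for _ in range(NODE_STATUS_UPDATE_RETRY):
        try:
            patch_status()  # PATCH the node status via the API server
            return
        except RuntimeError as err:
            print(f'"Error updating node status, will retry" err="{err}"')
    print('"Unable to update node status" err="update node status exceeds retry count"')
```

Because the webhook rejects every attempt identically, each cycle burns its whole budget and the node's Ready condition never reaches the API server.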
event="NodeHasSufficientMemory" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.499423 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.499432 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.499451 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.499461 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:23Z","lastTransitionTime":"2026-01-27T16:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.602474 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.602580 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.602607 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.602634 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.602652 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:23Z","lastTransitionTime":"2026-01-27T16:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.645941 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.645985 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:23 crc kubenswrapper[5049]: E0127 16:58:23.646155 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.646234 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:23 crc kubenswrapper[5049]: E0127 16:58:23.646323 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:23 crc kubenswrapper[5049]: E0127 16:58:23.646415 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.646500 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:57:17.615446821 +0000 UTC Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.705775 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.705818 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.705830 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.705853 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.705870 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:23Z","lastTransitionTime":"2026-01-27T16:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
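Independently of the webhook failure, the runtime keeps reporting NetworkReady=false because nothing has written a CNI network config to /etc/kubernetes/cni/net.d/ (normally the job of the network-operator-managed pods, which cannot start while the node is NotReady). The readiness check amounts to a scan of that directory; a hypothetical sketch, where the path comes from the log message and the file extensions are the conventional CNI ones, not read from this log:

```python
import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # path taken from the log message
CNI_EXTENSIONS = (".conf", ".conflist", ".json")  # conventional, assumed

def has_cni_config(conf_dir: str = CNI_CONF_DIR) -> bool:
    """Return True if any CNI network configuration file is present."""
    try:
        return any(name.endswith(CNI_EXTENSIONS) for name in os.listdir(conf_dir))
    except FileNotFoundError:
        return False

print("NetworkReady:", has_cni_config())
```

Until that directory gains a config, every pod that needs a sandbox network ("No sandbox for pod can be found" above) is skipped with the same "network is not ready" error.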
Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.809612 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.809896 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.809921 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.809950 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.809968 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:23Z","lastTransitionTime":"2026-01-27T16:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.913180 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.913253 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.913262 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.913283 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:23 crc kubenswrapper[5049]: I0127 16:58:23.913296 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:23Z","lastTransitionTime":"2026-01-27T16:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.016568 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.016645 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.016669 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.016739 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.016757 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:24Z","lastTransitionTime":"2026-01-27T16:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.119628 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.119708 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.119724 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.119750 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.119955 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:24Z","lastTransitionTime":"2026-01-27T16:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.223242 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.223309 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.223333 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.223361 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.223379 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:24Z","lastTransitionTime":"2026-01-27T16:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.327650 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.327753 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.327766 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.327790 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.327805 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:24Z","lastTransitionTime":"2026-01-27T16:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.431248 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.431311 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.431328 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.431354 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.431371 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:24Z","lastTransitionTime":"2026-01-27T16:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.534559 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.534631 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.534653 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.534746 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.534779 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:24Z","lastTransitionTime":"2026-01-27T16:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.638895 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.638971 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.638989 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.639024 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.639044 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:24Z","lastTransitionTime":"2026-01-27T16:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.645585 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:24 crc kubenswrapper[5049]: E0127 16:58:24.645843 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.647609 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 18:16:28.954253606 +0000 UTC Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.742567 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.742622 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.742633 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.742650 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.742661 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:24Z","lastTransitionTime":"2026-01-27T16:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.846112 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.846185 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.846206 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.846240 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.846263 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:24Z","lastTransitionTime":"2026-01-27T16:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.949019 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.949081 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.949107 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.949141 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:24 crc kubenswrapper[5049]: I0127 16:58:24.949160 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:24Z","lastTransitionTime":"2026-01-27T16:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.052302 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.052354 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.052371 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.052393 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.052410 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:25Z","lastTransitionTime":"2026-01-27T16:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.155572 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.155646 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.155668 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.155746 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.155772 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:25Z","lastTransitionTime":"2026-01-27T16:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.259285 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.259374 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.259399 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.259431 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.259455 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:25Z","lastTransitionTime":"2026-01-27T16:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.362404 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.362473 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.362495 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.362526 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.362547 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:25Z","lastTransitionTime":"2026-01-27T16:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.465606 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.465732 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.465760 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.465792 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.465814 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:25Z","lastTransitionTime":"2026-01-27T16:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.569291 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.569366 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.569389 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.569418 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.569435 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:25Z","lastTransitionTime":"2026-01-27T16:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.645277 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.645327 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:25 crc kubenswrapper[5049]: E0127 16:58:25.645480 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.645573 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:25 crc kubenswrapper[5049]: E0127 16:58:25.645814 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:25 crc kubenswrapper[5049]: E0127 16:58:25.646110 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.648749 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 21:38:59.825413788 +0000 UTC Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.663144 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.672819 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.672878 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.672896 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.672919 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.672936 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:25Z","lastTransitionTime":"2026-01-27T16:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.678947 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.696837 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad1db96-84e1-4083-8023-4d9fdc72dc54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29c84da654b6b287bd96bdd26e4c0ce623a1f76d3f8e043be531ec0fdceec7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.712264 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.723649 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.735147 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.769652 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7
361322a32b70ea910a46b245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:04Z\\\",\\\"message\\\":\\\"oing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-lv4sx]\\\\nI0127 16:58:04.892353 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0127 16:58:04.892360 6813 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:58:04.892378 6813 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:58:04.892381 6813 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-lv4sx before timer (time: 2026-01-27 16:58:05.892444043 +0000 UTC m=+1.639671013): skip\\\\nI0127 16:58:04.892401 6813 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 64.181µs)\\\\nI0127 16:58:04.892428 6813 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:58:04.892445 6813 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:58:04.892463 6813 factory.go:656] Stopping watch factory\\\\nI0127 16:58:04.892479 6813 ovnkube.go:599] Stopped ovnkube\\\\nI0127 16:58:04.892517 6813 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:58:04.892544 6813 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 16:58:04.892633 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.777396 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.777466 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.777488 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.777516 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.777535 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:25Z","lastTransitionTime":"2026-01-27T16:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.785280 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.797977 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.810664 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.824560 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.835604 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836b443e3565d68c8d2b62b22874ce3ba84e9c4088924b18c8aafffd4ff804f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:14Z\\\",\\\"message\\\":\\\"2026-01-27T16:57:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912\\\\n2026-01-27T16:57:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912 to /host/opt/cni/bin/\\\\n2026-01-27T16:57:29Z [verbose] multus-daemon started\\\\n2026-01-27T16:57:29Z [verbose] Readiness Indicator file check\\\\n2026-01-27T16:58:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:58:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.847220 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 
16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.862836 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.874538 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.879506 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.879559 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.879574 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.879593 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.879605 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:25Z","lastTransitionTime":"2026-01-27T16:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.885597 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.897783 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.908753 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:25Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.982905 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.982947 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.982962 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.982983 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:25 crc kubenswrapper[5049]: I0127 16:58:25.982999 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:25Z","lastTransitionTime":"2026-01-27T16:58:25Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.085321 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.085356 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.085367 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.085385 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.085397 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:26Z","lastTransitionTime":"2026-01-27T16:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.188514 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.188561 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.188575 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.188595 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.188606 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:26Z","lastTransitionTime":"2026-01-27T16:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.293003 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.293109 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.293134 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.293164 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.293184 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:26Z","lastTransitionTime":"2026-01-27T16:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.395593 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.395730 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.395750 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.395773 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.395787 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:26Z","lastTransitionTime":"2026-01-27T16:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.499791 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.499858 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.499875 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.499902 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.499915 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:26Z","lastTransitionTime":"2026-01-27T16:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.602381 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.602444 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.602462 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.602487 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.602504 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:26Z","lastTransitionTime":"2026-01-27T16:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.646075 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:26 crc kubenswrapper[5049]: E0127 16:58:26.646257 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.649766 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 03:27:46.385854628 +0000 UTC Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.666907 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.705530 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.705569 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.705581 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.705598 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:26 crc kubenswrapper[5049]: I0127 16:58:26.705610 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:26Z","lastTransitionTime":"2026-01-27T16:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 16:58:27 crc kubenswrapper[5049]: I0127 16:58:27.645494 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 16:58:27 crc kubenswrapper[5049]: I0127 16:58:27.645547 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 16:58:27 crc kubenswrapper[5049]: E0127 16:58:27.645658 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 16:58:27 crc kubenswrapper[5049]: I0127 16:58:27.645717 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx"
Jan 27 16:58:27 crc kubenswrapper[5049]: E0127 16:58:27.645836 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 16:58:27 crc kubenswrapper[5049]: E0127 16:58:27.646132 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892"
Jan 27 16:58:27 crc kubenswrapper[5049]: I0127 16:58:27.649896 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:57:15.429935021 +0000 UTC
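Every sandbox-creation attempt for these pods fails the same way until the network plugin reports ready, so in a long excerpt the useful signal is just the distinct set of pods being skipped and their UIDs. A small sketch that collects them (the input path kubelet.log is hypothetical):

import re

POD_RE = re.compile(r'"Error syncing pod, skipping" err="network is not ready:[^"]*" '
                    r'pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"')

def stuck_pods(text):
    """Map namespace/name -> podUID for pods skipped while the network is not ready."""
    return {m.group("pod"): m.group("uid") for m in POD_RE.finditer(text)}

with open("kubelet.log") as f:  # hypothetical: a saved excerpt of this journal
    for pod, uid in sorted(stuck_pods(f.read()).items()):
        print(pod, uid)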
Jan 27 16:58:28 crc kubenswrapper[5049]: I0127 16:58:28.645412 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 16:58:28 crc kubenswrapper[5049]: E0127 16:58:28.645826 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 16:58:28 crc kubenswrapper[5049]: I0127 16:58:28.650988 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 18:29:42.650266136 +0000 UTC
Jan 27 16:58:29 crc kubenswrapper[5049]: I0127 16:58:29.514365 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.514600 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:33.514570263 +0000 UTC m=+148.613543812 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:58:29 crc kubenswrapper[5049]: I0127 16:58:29.514825 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 16:58:29 crc kubenswrapper[5049]: I0127 16:58:29.514885 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.515030 5049 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.515050 5049 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.515099 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:59:33.515089267 +0000 UTC m=+148.614062816 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.515117 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 16:59:33.515109058 +0000 UTC m=+148.614082607 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 27 16:58:29 crc kubenswrapper[5049]: I0127 16:58:29.615760 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 16:58:29 crc kubenswrapper[5049]: I0127 16:58:29.615824 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.615953 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.615973 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.615986 5049 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.616031 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 16:59:33.616016607 +0000 UTC m=+148.714990146 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.615953 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.616058 5049 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.616069 5049 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.616102 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 16:59:33.61609334 +0000 UTC m=+148.715066889 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 27 16:58:29 crc kubenswrapper[5049]: I0127 16:58:29.645871 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 16:58:29 crc kubenswrapper[5049]: I0127 16:58:29.646008 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 16:58:29 crc kubenswrapper[5049]: I0127 16:58:29.646027 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx"
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.646190 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.646306 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 16:58:29 crc kubenswrapper[5049]: E0127 16:58:29.646455 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892"
Jan 27 16:58:29 crc kubenswrapper[5049]: I0127 16:58:29.651650 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:13:59.115545757 +0000 UTC
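The earlier UnmountVolume failure ("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers") indicates the CSI driver had not yet re-registered with the restarted kubelet; drivers register by placing a socket in the kubelet's plugin registry directory. A quick way to see which drivers have registered (the path is the conventional default under /var/lib/kubelet and may differ if the kubelet's root directory was changed):

import os

REGISTRY = "/var/lib/kubelet/plugins_registry"  # default plugin-registration dir

def registered_plugins(path=REGISTRY):
    """List registration sockets; each driver drops one here when it registers."""
    try:
        return [n for n in sorted(os.listdir(path)) if n.endswith(".sock")]
    except FileNotFoundError:
        return []

for sock in registered_plugins():
    print(sock)  # expect a kubevirt.io.hostpath-provisioner socket once it is back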
Has your network provider started?"} Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.644937 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:30 crc kubenswrapper[5049]: E0127 16:58:30.645101 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.652047 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:21:30.89952565 +0000 UTC Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.748018 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.748061 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.748077 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.748108 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.748125 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:30Z","lastTransitionTime":"2026-01-27T16:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.851716 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.851781 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.851797 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.851821 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.851838 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:30Z","lastTransitionTime":"2026-01-27T16:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.954365 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.954420 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.954437 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.954461 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:30 crc kubenswrapper[5049]: I0127 16:58:30.954480 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:30Z","lastTransitionTime":"2026-01-27T16:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.057327 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.057386 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.057394 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.057416 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.057428 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:31Z","lastTransitionTime":"2026-01-27T16:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.161574 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.162205 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.162219 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.162247 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.162267 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:31Z","lastTransitionTime":"2026-01-27T16:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.266395 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.266439 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.266452 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.266476 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.266488 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:31Z","lastTransitionTime":"2026-01-27T16:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.369505 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.369598 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.369618 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.369647 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.369666 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:31Z","lastTransitionTime":"2026-01-27T16:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.472787 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.472853 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.472869 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.472894 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.472911 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:31Z","lastTransitionTime":"2026-01-27T16:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.578432 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.578501 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.578523 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.578557 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.578579 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:31Z","lastTransitionTime":"2026-01-27T16:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.645801 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.645891 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:31 crc kubenswrapper[5049]: E0127 16:58:31.646090 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.646117 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:31 crc kubenswrapper[5049]: E0127 16:58:31.646291 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:31 crc kubenswrapper[5049]: E0127 16:58:31.646443 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.652311 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 13:36:44.278261898 +0000 UTC Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.684041 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.684110 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.684132 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.684173 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.684192 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:31Z","lastTransitionTime":"2026-01-27T16:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.787957 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.788031 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.788057 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.788086 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.788108 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:31Z","lastTransitionTime":"2026-01-27T16:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.892070 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.892157 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.892182 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.892210 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.892231 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:31Z","lastTransitionTime":"2026-01-27T16:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.995206 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.995357 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.995377 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.995403 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:31 crc kubenswrapper[5049]: I0127 16:58:31.995424 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:31Z","lastTransitionTime":"2026-01-27T16:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.099035 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.099105 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.099131 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.099160 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.099182 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:32Z","lastTransitionTime":"2026-01-27T16:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.202715 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.202807 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.202827 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.202857 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.202874 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:32Z","lastTransitionTime":"2026-01-27T16:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.307315 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.307378 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.307394 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.307419 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.307436 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:32Z","lastTransitionTime":"2026-01-27T16:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.411036 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.411129 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.411153 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.411188 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.411212 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:32Z","lastTransitionTime":"2026-01-27T16:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.514783 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.514839 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.514850 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.514874 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.514892 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:32Z","lastTransitionTime":"2026-01-27T16:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.618013 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.618086 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.618105 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.618135 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.618155 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:32Z","lastTransitionTime":"2026-01-27T16:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.645863 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:32 crc kubenswrapper[5049]: E0127 16:58:32.646566 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.647045 5049 scope.go:117] "RemoveContainer" containerID="f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.653035 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:21:06.284322293 +0000 UTC Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.723216 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.723279 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.723301 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.723331 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.723355 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:32Z","lastTransitionTime":"2026-01-27T16:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.827992 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.828067 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.828086 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.828109 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.828157 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:32Z","lastTransitionTime":"2026-01-27T16:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.932236 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.932323 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.932379 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.932403 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:32 crc kubenswrapper[5049]: I0127 16:58:32.932993 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:32Z","lastTransitionTime":"2026-01-27T16:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.036203 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.036248 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.036260 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.036279 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.036292 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:33Z","lastTransitionTime":"2026-01-27T16:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.140080 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.140172 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.140204 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.140243 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.140272 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:33Z","lastTransitionTime":"2026-01-27T16:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.207419 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/2.log" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.211648 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerStarted","Data":"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909"} Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.212418 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.234974 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z"
[... duplicate status cycle at 16:58:33.243 omitted ...]
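The entry above is the first of a series of status-patch failures that continue below: every PATCH to the API server is rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, months before the node's current clock time of 2026-01-27. The "x509: certificate has expired or is not yet valid" message is the standard NotBefore/NotAfter window test; below is a minimal way to confirm such a condition against a PEM file (the path is hypothetical, not from the log):

```go
// Sketch: the validity-window check behind "x509: certificate has expired
// or is not yet valid". The certificate path is hypothetical.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("webhook-serving-cert.pem") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Println("not yet valid until", cert.NotBefore)
	case now.After(cert.NotAfter):
		// The case in this log: current time is after NotAfter.
		fmt.Println("expired at", cert.NotAfter)
	default:
		fmt.Println("valid until", cert.NotAfter)
	}
}
```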
Has your network provider started?"} Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.252001 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.271181 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.297228 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41566245-fb9e-4144-99ab-5ef20566560d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d88f46ed39c5a10bdef1ddff18757fc2476df93ece7b1913b60f4b22571f4e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60bc8f0eae510e45278f4b3ed7ac73074979861d314c8eebbf13f98cc5a63f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a9044edd570a4cd74f54ae040c0d761124fc9a91d4a2472ccf7a560dca844dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://c22cbeb1f4ce32c35cd0fbde6b0a6c6dfab4b8c814a84eac20ceb59416cf8baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://235941479e8424cc9b1ab7c8d1447f18835a7e8a96369200d9d8d142190be06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d35fb87d34861d569ff2f1c70ab8ecd8ba9ed65c3bb1647522b416ebf925a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d35fb87d34861d569ff2f1c70ab8ecd8ba9ed65c3bb1647522b416ebf925a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18b111b7e2dc6f7853faccdbf9a45e9d46b5e8dce866626fb73b5e3e6167cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18b111b7e2dc6f7853faccdbf9a45e9d46b5e8dce866626fb73b5e3e6167cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://b8a38a88c078a8bdacdbdfe19c21a59c4be8ce40698d389aa81e103d7682b93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8a38a88c078a8bdacdbdfe19c21a59c4be8ce40698d389aa81e103d7682b93b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.314714 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.327325 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.339812 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.349646 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.349757 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.349779 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.349810 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.349832 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:33Z","lastTransitionTime":"2026-01-27T16:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.360868 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.376116 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.395920 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5fb8cd6b1dd7741ff0aeb58417259d78a4645e
cbc2ef52eb9d828504e23909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:04Z\\\",\\\"message\\\":\\\"oing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-lv4sx]\\\\nI0127 16:58:04.892353 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0127 16:58:04.892360 6813 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:58:04.892378 6813 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:58:04.892381 6813 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-lv4sx before timer (time: 2026-01-27 16:58:05.892444043 +0000 UTC m=+1.639671013): skip\\\\nI0127 16:58:04.892401 6813 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 64.181µs)\\\\nI0127 16:58:04.892428 6813 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:58:04.892445 6813 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:58:04.892463 6813 factory.go:656] Stopping watch factory\\\\nI0127 16:58:04.892479 6813 ovnkube.go:599] Stopped ovnkube\\\\nI0127 16:58:04.892517 6813 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:58:04.892544 6813 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 16:58:04.892633 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.405948 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.417552 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.429183 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.440500 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad1db96-84e1-4083-8023-4d9fdc72dc54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29c84da654b6b287bd96bdd26e4c0ce623a1f76d3f8e043be531ec0fdceec7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.452498 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.452534 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.452544 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.452581 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.452592 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:33Z","lastTransitionTime":"2026-01-27T16:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.455481 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.470203 5049 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836b443e3565d68c8d2b62b22874ce3ba84e9c4088924b18c8aafffd4ff804f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:14Z\\\",\\\"message\\\":\\\"2026-01-27T16:57:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912\\\\n2026-01-27T16:57:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912 to /host/opt/cni/bin/\\\\n2026-01-27T16:57:29Z [verbose] multus-daemon started\\\\n2026-01-27T16:57:29Z [verbose] Readiness Indicator file check\\\\n2026-01-27T16:58:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:58:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.483575 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 
16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.496006 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.508359 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.554895 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.555013 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.555088 5049 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.555152 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.555220 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:33Z","lastTransitionTime":"2026-01-27T16:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.641252 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.641327 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.641340 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.641367 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.641386 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:33Z","lastTransitionTime":"2026-01-27T16:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.645118 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.645208 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:33 crc kubenswrapper[5049]: E0127 16:58:33.645257 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.645337 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:33 crc kubenswrapper[5049]: E0127 16:58:33.645426 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:33 crc kubenswrapper[5049]: E0127 16:58:33.645547 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.655325 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 09:05:18.086252459 +0000 UTC Jan 27 16:58:33 crc kubenswrapper[5049]: E0127 16:58:33.664867 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.670737 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.671010 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.671217 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.671455 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.671697 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:33Z","lastTransitionTime":"2026-01-27T16:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:33 crc kubenswrapper[5049]: E0127 16:58:33.695335 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.699284 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.699427 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.699526 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.699642 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.699778 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:33Z","lastTransitionTime":"2026-01-27T16:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:33 crc kubenswrapper[5049]: E0127 16:58:33.714298 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.717649 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.717713 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.717733 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.717756 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.717775 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:33Z","lastTransitionTime":"2026-01-27T16:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:33 crc kubenswrapper[5049]: E0127 16:58:33.737962 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.742233 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.742280 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.742295 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.742317 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.742332 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:33Z","lastTransitionTime":"2026-01-27T16:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:33 crc kubenswrapper[5049]: E0127 16:58:33.760323 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:33Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:33 crc kubenswrapper[5049]: E0127 16:58:33.760774 5049 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.762367 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.762502 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.762585 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.762698 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.762787 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:33Z","lastTransitionTime":"2026-01-27T16:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.866465 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.866531 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.866551 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.866580 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.866598 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:33Z","lastTransitionTime":"2026-01-27T16:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.970131 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.970203 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.970230 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.970264 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:33 crc kubenswrapper[5049]: I0127 16:58:33.970292 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:33Z","lastTransitionTime":"2026-01-27T16:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.074512 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.074570 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.074589 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.074614 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.074633 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:34Z","lastTransitionTime":"2026-01-27T16:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.178427 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.178485 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.178510 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.178540 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.178559 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:34Z","lastTransitionTime":"2026-01-27T16:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.219287 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/3.log" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.220493 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/2.log" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.231347 5049 generic.go:334] "Generic (PLEG): container finished" podID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerID="ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909" exitCode=1 Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.231422 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerDied","Data":"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909"} Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.231486 5049 scope.go:117] "RemoveContainer" containerID="f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.232549 5049 scope.go:117] "RemoveContainer" containerID="ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909" Jan 27 16:58:34 crc kubenswrapper[5049]: E0127 16:58:34.232845 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.251425 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.265550 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.281655 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.281805 5049 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.281880 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.281905 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.281938 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.281962 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:34Z","lastTransitionTime":"2026-01-27T16:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.315518 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41566245-fb9e-4144-99ab-5ef20566560d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d88f46ed39c5a10bdef1ddff18757fc2476df93ece7b1913b60f4b22571f4e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60bc8f0eae510e45278f4b3ed7ac73074979861d314c8eebbf13f98cc5a63f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a9044edd570a4cd74f54ae040c0d761124fc9a91d4a2472ccf7a560dca844dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c22cbeb1f4ce32c35cd0fbde6b0a6c6dfab4b8c814a84eac20ceb59416cf8baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://235941479e8424cc9b1ab7c8d1447f18835a7e8a96369200d9d8d142190be06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d35fb87d34861d569ff2f1c70ab8ecd8ba9ed65c3bb1647522b416ebf925a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d35fb87d34861d569ff2f1c70ab8ecd8ba9ed65c3bb1647522b416ebf925a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18b111b7e2dc6f7853faccdbf9a45e9d46b5e8dce866626fb73b5e3e6167cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18b111b7e2dc6f7853faccdbf9a45e9d46b5e8dce866626fb73b5e3e6167cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b8a38a88c078a8bdacdbdfe19c21a59c4be8ce40698d389aa81e103d7682b93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8a38a88c078a8bdacdbdfe19c21a59c4be8ce40698d389aa81e103d7682b93b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.336456 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.353315 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.370815 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.385710 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.386089 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.386209 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.386326 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.386423 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:34Z","lastTransitionTime":"2026-01-27T16:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.390454 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.411939 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.440385 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5fb8cd6b1dd7741ff0aeb58417259d78a4645e
cbc2ef52eb9d828504e23909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d1527ca8985b5c75865684474d58ca11083dd7361322a32b70ea910a46b245\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:04Z\\\",\\\"message\\\":\\\"oing to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-lv4sx]\\\\nI0127 16:58:04.892353 6813 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0127 16:58:04.892360 6813 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 16:58:04.892378 6813 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0127 16:58:04.892381 6813 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-lv4sx before timer (time: 2026-01-27 16:58:05.892444043 +0000 UTC m=+1.639671013): skip\\\\nI0127 16:58:04.892401 6813 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 64.181µs)\\\\nI0127 16:58:04.892428 6813 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 16:58:04.892445 6813 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 16:58:04.892463 6813 factory.go:656] Stopping watch factory\\\\nI0127 16:58:04.892479 6813 ovnkube.go:599] Stopped ovnkube\\\\nI0127 16:58:04.892517 6813 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 16:58:04.892544 6813 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 16:58:04.892633 6813 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:58:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"ing zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc\\\\nI0127 16:58:33.655057 7253 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI0127 16:58:33.655068 7253 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0127 16:58:33.655039 7253 services_controller.go:434] Service openshift-kube-apiserver-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-kube-apiserver-operator 70a45401-9850-413a-87c2-e90a7258374e 4267 0 2025-02-23 05:12:37 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:kube-apiserver-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 
service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00731395b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\
\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.457811 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.473489 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.490096 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.490159 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.490179 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.490206 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.490224 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:34Z","lastTransitionTime":"2026-01-27T16:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.492754 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.509793 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad1db96-84e1-4083-8023-4d9fdc72dc54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29c84da654b6b287bd96bdd26e4c0ce623a1f76d3f8e043be531ec0fdceec7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.531352 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.552184 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836b443e3565d68c8d2b62b22874ce3ba84e9c4088924b18c8aafffd4ff804f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:14Z\\\",\\\"message\\\":\\\"2026-01-27T16:57:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912\\\\n2026-01-27T16:57:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912 to /host/opt/cni/bin/\\\\n2026-01-27T16:57:29Z [verbose] multus-daemon started\\\\n2026-01-27T16:57:29Z [verbose] Readiness Indicator file check\\\\n2026-01-27T16:58:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:58:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.567611 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 
16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.587964 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.594012 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.594103 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.594122 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.594150 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.594169 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:34Z","lastTransitionTime":"2026-01-27T16:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.609511 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:34Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.645960 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:34 crc kubenswrapper[5049]: E0127 16:58:34.646159 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.656317 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 11:28:14.564390491 +0000 UTC Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.697426 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.697475 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.697488 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.697507 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.697523 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:34Z","lastTransitionTime":"2026-01-27T16:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.800054 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.800117 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.800135 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.800162 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.800197 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:34Z","lastTransitionTime":"2026-01-27T16:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.903948 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.903998 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.904014 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.904048 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:34 crc kubenswrapper[5049]: I0127 16:58:34.904070 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:34Z","lastTransitionTime":"2026-01-27T16:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.007005 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.007062 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.007079 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.007106 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.007125 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:35Z","lastTransitionTime":"2026-01-27T16:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.110279 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.110350 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.110373 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.110405 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.110423 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:35Z","lastTransitionTime":"2026-01-27T16:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.214782 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.214821 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.214832 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.214851 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.214863 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:35Z","lastTransitionTime":"2026-01-27T16:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.236810 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/3.log" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.241835 5049 scope.go:117] "RemoveContainer" containerID="ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909" Jan 27 16:58:35 crc kubenswrapper[5049]: E0127 16:58:35.242112 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.259067 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 
16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.278624 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.302177 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.324598 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.324712 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.324734 5049 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.324765 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.324792 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:35Z","lastTransitionTime":"2026-01-27T16:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.333224 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d
0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Runnin
g\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.360030 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836b443e3565d68c8d2b62b22874ce3ba84e9c4088924b18c8aafffd4ff804f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:14Z\\\",\\\"message\\\":\\\"2026-01-27T16:57:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912\\\\n2026-01-27T16:57:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912 to /host/opt/cni/bin/\\\\n2026-01-27T16:57:29Z [verbose] multus-daemon started\\\\n2026-01-27T16:57:29Z [verbose] Readiness Indicator file check\\\\n2026-01-27T16:58:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:58:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.376535 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.400424 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41566245-fb9e-4144-99ab-5ef20566560d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d88f46ed39c5a10bdef1ddff18757fc2476df93ece7b1913b60f4b22571f4e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60bc8f0eae510e45278f4b3ed7ac73074979861d314c8eebbf13f98cc5a63f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a9044edd570a4cd74f54ae040c0d761124fc9a91d4a2472ccf7a560dca844dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://c22cbeb1f4ce32c35cd0fbde6b0a6c6dfab4b8c814a84eac20ceb59416cf8baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://235941479e8424cc9b1ab7c8d1447f18835a7e8a96369200d9d8d142190be06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d35fb87d34861d569ff2f1c70ab8ecd8ba9ed65c3bb1647522b416ebf925a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d35fb87d34861d569ff2f1c70ab8ecd8ba9ed65c3bb1647522b416ebf925a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18b111b7e2dc6f7853faccdbf9a45e9d46b5e8dce866626fb73b5e3e6167cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18b111b7e2dc6f7853faccdbf9a45e9d46b5e8dce866626fb73b5e3e6167cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://b8a38a88c078a8bdacdbdfe19c21a59c4be8ce40698d389aa81e103d7682b93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8a38a88c078a8bdacdbdfe19c21a59c4be8ce40698d389aa81e103d7682b93b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.425898 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.436259 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.436323 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.436348 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.436503 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.436564 5049 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:35Z","lastTransitionTime":"2026-01-27T16:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.448465 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.468958 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.483482 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.501409 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.537938 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5fb8cd6b1dd7741ff0aeb58417259d78a4645e
cbc2ef52eb9d828504e23909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"ing zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc\\\\nI0127 16:58:33.655057 7253 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI0127 16:58:33.655068 7253 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0127 16:58:33.655039 7253 services_controller.go:434] Service openshift-kube-apiserver-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-kube-apiserver-operator 70a45401-9850-413a-87c2-e90a7258374e 4267 0 2025-02-23 05:12:37 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:kube-apiserver-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00731395b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:58:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.539200 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.539256 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.539276 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.539302 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.539320 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:35Z","lastTransitionTime":"2026-01-27T16:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.554881 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.567119 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.586197 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.597389 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad1db96-84e1-4083-8023-4d9fdc72dc54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29c84da654b6b287bd96bdd26e4c0ce623a1f76d3f8e043be531ec0fdceec7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.616819 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.639918 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.641755 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.641804 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.641822 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.641848 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.641867 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:35Z","lastTransitionTime":"2026-01-27T16:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.645273 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.645322 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:35 crc kubenswrapper[5049]: E0127 16:58:35.645581 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.645659 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:35 crc kubenswrapper[5049]: E0127 16:58:35.645868 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:35 crc kubenswrapper[5049]: E0127 16:58:35.646016 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.657246 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 09:12:53.671261232 +0000 UTC Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.666796 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.688301 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.710278 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.743110 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5fb8cd6b1dd7741ff0aeb58417259d78a4645e
cbc2ef52eb9d828504e23909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"ing zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc\\\\nI0127 16:58:33.655057 7253 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI0127 16:58:33.655068 7253 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0127 16:58:33.655039 7253 services_controller.go:434] Service openshift-kube-apiserver-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-kube-apiserver-operator 70a45401-9850-413a-87c2-e90a7258374e 4267 0 2025-02-23 05:12:37 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:kube-apiserver-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00731395b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:58:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.745314 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.745373 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.745395 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.745424 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.745446 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:35Z","lastTransitionTime":"2026-01-27T16:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.764834 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.787951 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.808901 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.826883 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad1db96-84e1-4083-8023-4d9fdc72dc54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29c84da654b6b287bd96bdd26e4c0ce623a1f76d3f8e043be531ec0fdceec7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.841794 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63d094db-b027-49de-8ac0-427f5cd179e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://470cfe95fc3ab4c468d4ba8a1da8481a9c5f8dad62ef9932702c8f3f0c31cd0d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f836
2f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://719886bb1b2b3523c898a1825eaa8a120ad4d4671d573eafda2de544d7ce3f00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26865a8889b575ed087cb7da82a17fabe564e35cf2da01b4d993be4f3491b006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c216d2618cc3b19bd16a4e6c5296aa3d23663c126fdae701a043ea4d55fedf37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f9404df64b99974e4cb83a3cf71597db38e9633dcfe580047b8e0760c0a53d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb4025105d183656421a0c5d292d9f37d572bfd80d0898a019f818ee5f6e8973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://317080f2f6a49d84fa7cabfb576b49f25f1d0fa1094ce758e42a1844efe01b34\\\",\\
\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl9s6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2zsnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.848292 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.848327 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.848339 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.848358 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.848370 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:35Z","lastTransitionTime":"2026-01-27T16:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.854802 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hc4th" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://836b443e3565d68c8d2b62b22874ce3ba84e9c4088924b18c8aafffd4ff804f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:14Z\\\",\\\"message\\\":\\\"2026-01-27T16:57:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912\\\\n2026-01-27T16:57:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f938d83c-b2b8-44c8-8426-557e4fe5a912 to /host/opt/cni/bin/\\\\n2026-01-27T16:57:29Z [verbose] multus-daemon started\\\\n2026-01-27T16:57:29Z [verbose] Readiness Indicator file check\\\\n2026-01-27T16:58:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:58:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbbm7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hc4th\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.865477 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0683e0b9-a15b-4b54-a165-1073c0494cf7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e2114057207b1c30186107365e2dbf89cfee41faf30de19a1ae4bfe8c19c381\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a48a611a1d1c63c7a1ec17b8134dd4d33a6317c61dc23824c1d3d668f7b1e3f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7jsf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q27t9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 
16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.877201 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0bb80b18d69f66f39f1ebbc6ccfed7b12472913437bca987d8b8c3829ff4c518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.888356 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c29806db15d3ba78156dcb9617ed09047a8dfd035c6508ea4efa44b0f664fce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://277f4b9f43c78391b2b380e293ede79c9c92a2fa1375d1bf8174c2af8816dae2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.899331 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.910145 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6502fc579c7e491b54c7ffb42a9e01fd8ae2430ecf9f006e3a5b545a0bffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.920023 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b714597d-68b8-4f8f-9d55-9f1cea23324a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63a8d67fac01f39ec2f526cd2760197c6a9ddb08a5cddf401d975d4f840ccae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvr84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2d7n9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.939795 5049 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41566245-fb9e-4144-99ab-5ef20566560d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d88f46ed39c5a10bdef1ddff18757fc2476df93ece7b1913b60f4b22571f4e99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60bc8f0eae510e45278f4b3ed7ac73074979861d314c8eebbf13f98cc5a63f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a9044edd570a4cd74f54ae040c0d761124fc9a91d4a2472ccf7a560dca844dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://c22cbeb1f4ce32c35cd0fbde6b0a6c6dfab4b8c814a84eac20ceb59416cf8baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://235941479e8424cc9b1ab7c8d1447f18835a7e8a96369200d9d8d142190be06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d35fb87d34861d569ff2f1c70ab8ecd8ba9ed65c3bb1647522b416ebf925a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73d35fb87d34861d569ff2f1c70ab8ecd8ba9ed65c3bb1647522b416ebf925a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18b111b7e2dc6f7853faccdbf9a45e9d46b5e8dce866626fb73b5e3e6167cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18b111b7e2dc6f7853faccdbf9a45e9d46b5e8dce866626fb73b5e3e6167cab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://b8a38a88c078a8bdacdbdfe19c21a59c4be8ce40698d389aa81e103d7682b93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8a38a88c078a8bdacdbdfe19c21a59c4be8ce40698d389aa81e103d7682b93b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.950617 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.950695 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.950723 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.950745 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.950759 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:35Z","lastTransitionTime":"2026-01-27T16:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.956866 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"227f3d04-5eef-4098-ba74-02c5298ec452\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 16:57:26.479662 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 16:57:26.479798 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 16:57:26.480885 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1931409684/tls.crt::/tmp/serving-cert-1931409684/tls.key\\\\\\\"\\\\nI0127 16:57:26.888601 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 16:57:26.896598 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 16:57:26.896631 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 16:57:26.896655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 16:57:26.896659 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 16:57:26.921145 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 16:57:26.921172 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921177 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 16:57:26.921182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 16:57:26.921185 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 16:57:26.921188 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 16:57:26.921191 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 16:57:26.921352 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 16:57:26.925284 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:35 crc kubenswrapper[5049]: I0127 16:58:35.967814 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l8gpm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bf0a52b-305e-49f5-b397-c66ec99f3d8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://056becc36afc2ae60d44cf7f7d44e867a7bdda3515766287d74565d33edd6ec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnlbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l8gpm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:35Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.054410 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.054483 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.054501 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.054529 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.054546 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:36Z","lastTransitionTime":"2026-01-27T16:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.159857 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.159895 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.159908 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.159925 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.159936 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:36Z","lastTransitionTime":"2026-01-27T16:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.262596 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.262642 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.262652 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.262671 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.262685 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:36Z","lastTransitionTime":"2026-01-27T16:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.366353 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.366419 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.366437 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.366463 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.366482 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:36Z","lastTransitionTime":"2026-01-27T16:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.470076 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.470154 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.470185 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.470221 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.470246 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:36Z","lastTransitionTime":"2026-01-27T16:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.573827 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.573921 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.573948 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.573980 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.574005 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:36Z","lastTransitionTime":"2026-01-27T16:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.645050 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:36 crc kubenswrapper[5049]: E0127 16:58:36.645304 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.657854 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 23:42:16.492933043 +0000 UTC Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.677609 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.677749 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.677776 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.677808 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.677829 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:36Z","lastTransitionTime":"2026-01-27T16:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.781605 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.781675 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.781734 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.781766 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.781790 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:36Z","lastTransitionTime":"2026-01-27T16:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.885444 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.885511 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.885528 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.885556 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.885580 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:36Z","lastTransitionTime":"2026-01-27T16:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.989180 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.989247 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.989270 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.989300 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:36 crc kubenswrapper[5049]: I0127 16:58:36.989325 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:36Z","lastTransitionTime":"2026-01-27T16:58:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.093526 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.093595 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.093620 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.093650 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.093715 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:37Z","lastTransitionTime":"2026-01-27T16:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.197518 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.197573 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.197585 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.197608 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.197623 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:37Z","lastTransitionTime":"2026-01-27T16:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.300719 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.300786 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.300804 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.300834 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.300853 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:37Z","lastTransitionTime":"2026-01-27T16:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.404486 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.404558 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.404580 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.404617 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.404643 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:37Z","lastTransitionTime":"2026-01-27T16:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.508472 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.508543 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.508565 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.508590 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.508615 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:37Z","lastTransitionTime":"2026-01-27T16:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.612118 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.612183 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.612200 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.612225 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.612245 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:37Z","lastTransitionTime":"2026-01-27T16:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.646108 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.646109 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 16:58:37 crc kubenswrapper[5049]: E0127 16:58:37.646316 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.646466 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 16:58:37 crc kubenswrapper[5049]: E0127 16:58:37.646702 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 27 16:58:37 crc kubenswrapper[5049]: E0127 16:58:37.646821 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.658631 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:40:31.928839999 +0000 UTC
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.716217 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.716265 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.716281 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.716304 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.716322 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:37Z","lastTransitionTime":"2026-01-27T16:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.821458 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.821521 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.821534 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.821553 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.821571 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:37Z","lastTransitionTime":"2026-01-27T16:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.925429 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.925501 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.925526 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.925556 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:37 crc kubenswrapper[5049]: I0127 16:58:37.925580 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:37Z","lastTransitionTime":"2026-01-27T16:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.029363 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.029432 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.029450 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.029477 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.029495 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:38Z","lastTransitionTime":"2026-01-27T16:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.133108 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.133208 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.133235 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.133268 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.133290 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:38Z","lastTransitionTime":"2026-01-27T16:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.236312 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.236353 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.236363 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.236380 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.236391 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:38Z","lastTransitionTime":"2026-01-27T16:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.339889 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.339967 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.339995 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.340027 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.340053 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:38Z","lastTransitionTime":"2026-01-27T16:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.444073 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.444122 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.444141 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.444165 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.444182 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:38Z","lastTransitionTime":"2026-01-27T16:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.547861 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.547927 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.547946 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.547975 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.547992 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:38Z","lastTransitionTime":"2026-01-27T16:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.645525 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 16:58:38 crc kubenswrapper[5049]: E0127 16:58:38.646095 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.651290 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.651477 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.651651 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.651875 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.652124 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:38Z","lastTransitionTime":"2026-01-27T16:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.659720 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:40:38.770277701 +0000 UTC Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.755782 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.755869 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.755891 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.755920 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.755938 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:38Z","lastTransitionTime":"2026-01-27T16:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.858845 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.859336 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.859480 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.859651 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.859869 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:38Z","lastTransitionTime":"2026-01-27T16:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.962875 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.962922 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.962937 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.962957 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:38 crc kubenswrapper[5049]: I0127 16:58:38.962971 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:38Z","lastTransitionTime":"2026-01-27T16:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.066240 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.066674 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.066849 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.067090 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.067296 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:39Z","lastTransitionTime":"2026-01-27T16:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.171259 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.171314 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.171339 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.171361 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.171373 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:39Z","lastTransitionTime":"2026-01-27T16:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.273857 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.273900 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.273914 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.273936 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.273951 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:39Z","lastTransitionTime":"2026-01-27T16:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.377386 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.377459 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.377486 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.377520 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.377541 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:39Z","lastTransitionTime":"2026-01-27T16:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.481106 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.481179 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.481191 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.481212 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.481225 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:39Z","lastTransitionTime":"2026-01-27T16:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.584339 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.584432 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.584460 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.584498 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.584528 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:39Z","lastTransitionTime":"2026-01-27T16:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.645984 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.645984 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.646113 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 27 16:58:39 crc kubenswrapper[5049]: E0127 16:58:39.646290 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:39 crc kubenswrapper[5049]: E0127 16:58:39.646477 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:39 crc kubenswrapper[5049]: E0127 16:58:39.646660 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.660615 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:24:48.727815421 +0000 UTC Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.686956 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.687019 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.687110 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.687140 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.687162 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:39Z","lastTransitionTime":"2026-01-27T16:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.791197 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.791276 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.791300 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.791338 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.791360 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:39Z","lastTransitionTime":"2026-01-27T16:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.895189 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.895249 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.895263 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.895283 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.895296 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:39Z","lastTransitionTime":"2026-01-27T16:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.998657 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.998760 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.998779 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.998809 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:39 crc kubenswrapper[5049]: I0127 16:58:39.998836 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:39Z","lastTransitionTime":"2026-01-27T16:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.102990 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.103056 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.103074 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.103102 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.103126 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:40Z","lastTransitionTime":"2026-01-27T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.209308 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.209482 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.209519 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.209605 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.209669 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:40Z","lastTransitionTime":"2026-01-27T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.313508 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.313568 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.313584 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.313611 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.313628 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:40Z","lastTransitionTime":"2026-01-27T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.417875 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.417956 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.417976 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.418009 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.418032 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:40Z","lastTransitionTime":"2026-01-27T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.522026 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.522091 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.522101 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.522120 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.522131 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:40Z","lastTransitionTime":"2026-01-27T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.625770 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.625854 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.625869 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.625896 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.625928 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:40Z","lastTransitionTime":"2026-01-27T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.645776 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 27 16:58:40 crc kubenswrapper[5049]: E0127 16:58:40.646007 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.661193 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:25:49.089046073 +0000 UTC
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.728900 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.728952 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.728966 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.728988 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.729006 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:40Z","lastTransitionTime":"2026-01-27T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.832174 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.832296 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.832363 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.832401 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.832430 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:40Z","lastTransitionTime":"2026-01-27T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.935575 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.935659 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.935726 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.935761 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:40 crc kubenswrapper[5049]: I0127 16:58:40.935823 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:40Z","lastTransitionTime":"2026-01-27T16:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.038697 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.038740 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.038751 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.038768 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.038781 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:41Z","lastTransitionTime":"2026-01-27T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.142218 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.142256 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.142359 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.142380 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.142393 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:41Z","lastTransitionTime":"2026-01-27T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.244814 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.244864 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.244879 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.244905 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.244919 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:41Z","lastTransitionTime":"2026-01-27T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.347869 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.347939 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.347979 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.348005 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.348022 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:41Z","lastTransitionTime":"2026-01-27T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.451573 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.451656 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.451741 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.451772 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.451792 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:41Z","lastTransitionTime":"2026-01-27T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.555317 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.555383 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.555401 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.555429 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.555452 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:41Z","lastTransitionTime":"2026-01-27T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.646085 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.646146 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:41 crc kubenswrapper[5049]: E0127 16:58:41.646274 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.646103 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:41 crc kubenswrapper[5049]: E0127 16:58:41.646460 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:41 crc kubenswrapper[5049]: E0127 16:58:41.646767 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.659610 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.659667 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.659703 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.659721 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.659737 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:41Z","lastTransitionTime":"2026-01-27T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.661944 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:31:45.429333575 +0000 UTC Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.763372 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.763440 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.763459 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.763485 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.763503 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:41Z","lastTransitionTime":"2026-01-27T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.866889 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.866966 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.866989 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.867025 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.867050 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:41Z","lastTransitionTime":"2026-01-27T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.970084 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.970135 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.970148 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.970168 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:41 crc kubenswrapper[5049]: I0127 16:58:41.970180 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:41Z","lastTransitionTime":"2026-01-27T16:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.074244 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.074315 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.074333 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.074364 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.074386 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:42Z","lastTransitionTime":"2026-01-27T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.177797 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.177855 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.177874 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.177899 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.177921 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:42Z","lastTransitionTime":"2026-01-27T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.280815 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.280894 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.280917 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.280945 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.280967 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:42Z","lastTransitionTime":"2026-01-27T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.383957 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.384035 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.384051 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.384077 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.384092 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:42Z","lastTransitionTime":"2026-01-27T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.487904 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.487969 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.488026 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.488053 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.488070 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:42Z","lastTransitionTime":"2026-01-27T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.595255 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.595356 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.595378 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.595407 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.595440 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:42Z","lastTransitionTime":"2026-01-27T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.645334 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:42 crc kubenswrapper[5049]: E0127 16:58:42.645579 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.662622 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:22:02.292896819 +0000 UTC Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.700628 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.700737 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.700764 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.700797 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.700816 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:42Z","lastTransitionTime":"2026-01-27T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.803881 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.804001 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.804020 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.804045 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.804064 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:42Z","lastTransitionTime":"2026-01-27T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.908225 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.908299 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.908317 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.908345 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:42 crc kubenswrapper[5049]: I0127 16:58:42.908362 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:42Z","lastTransitionTime":"2026-01-27T16:58:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.011994 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.012052 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.012070 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.012096 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.012114 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:43Z","lastTransitionTime":"2026-01-27T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.115507 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.115560 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.115578 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.115602 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.115619 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:43Z","lastTransitionTime":"2026-01-27T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.219452 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.219563 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.219580 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.219642 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.219662 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:43Z","lastTransitionTime":"2026-01-27T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.323422 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.323485 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.323505 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.323529 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.323548 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:43Z","lastTransitionTime":"2026-01-27T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.428275 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.428329 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.428349 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.428374 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.428394 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:43Z","lastTransitionTime":"2026-01-27T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.531724 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.531788 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.531807 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.531838 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.531859 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:43Z","lastTransitionTime":"2026-01-27T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.634665 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.634745 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.634757 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.634773 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.634785 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:43Z","lastTransitionTime":"2026-01-27T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.645632 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:43 crc kubenswrapper[5049]: E0127 16:58:43.645983 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.646027 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:43 crc kubenswrapper[5049]: E0127 16:58:43.646227 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.646426 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:43 crc kubenswrapper[5049]: E0127 16:58:43.646825 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.663156 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 05:44:10.255778359 +0000 UTC Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.737925 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.737990 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.738014 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.738046 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.738074 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:43Z","lastTransitionTime":"2026-01-27T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.841932 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.841991 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.842005 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.842032 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.842049 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:43Z","lastTransitionTime":"2026-01-27T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.945413 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.945477 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.945491 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.945512 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:43 crc kubenswrapper[5049]: I0127 16:58:43.945532 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:43Z","lastTransitionTime":"2026-01-27T16:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.017188 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.017248 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.017267 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.017292 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.017310 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:44Z","lastTransitionTime":"2026-01-27T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:44 crc kubenswrapper[5049]: E0127 16:58:44.040526 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:44Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.047331 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.047419 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.047438 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.047470 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.047489 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:44Z","lastTransitionTime":"2026-01-27T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:44 crc kubenswrapper[5049]: E0127 16:58:44.070746 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:44Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.076378 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.076464 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.076485 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.076523 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.076547 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:44Z","lastTransitionTime":"2026-01-27T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:44 crc kubenswrapper[5049]: E0127 16:58:44.099443 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:44Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.105766 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.105838 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.105856 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.105890 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.105909 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:44Z","lastTransitionTime":"2026-01-27T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:44 crc kubenswrapper[5049]: E0127 16:58:44.127634 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:44Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.133890 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.133970 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.133994 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.134031 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.134058 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:44Z","lastTransitionTime":"2026-01-27T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:44 crc kubenswrapper[5049]: E0127 16:58:44.158066 5049 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T16:58:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"52a9b7e1-dcbf-429a-a612-98ea421b6253\\\",\\\"systemUUID\\\":\\\"e5f883ea-bc60-48f3-8792-0d2ec56b48dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:44Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:44 crc kubenswrapper[5049]: E0127 16:58:44.158395 5049 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.160957 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.161015 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.161035 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.161070 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.161095 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:44Z","lastTransitionTime":"2026-01-27T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.265654 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.265749 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.265767 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.265791 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.265811 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:44Z","lastTransitionTime":"2026-01-27T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.369765 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.370078 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.370130 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.370159 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.370175 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:44Z","lastTransitionTime":"2026-01-27T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.474164 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.474254 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.474273 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.474303 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.474330 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:44Z","lastTransitionTime":"2026-01-27T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.578234 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.578311 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.578330 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.578833 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.578865 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:44Z","lastTransitionTime":"2026-01-27T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.645899 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:44 crc kubenswrapper[5049]: E0127 16:58:44.646100 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.664046 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:11:25.987562396 +0000 UTC Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.682431 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.682486 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.682502 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.682525 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.682544 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:44Z","lastTransitionTime":"2026-01-27T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.787134 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.787195 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.787212 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.787234 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.787249 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:44Z","lastTransitionTime":"2026-01-27T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.891359 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.891431 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.891449 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.891478 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.891505 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:44Z","lastTransitionTime":"2026-01-27T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.995472 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.995537 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.995554 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.995580 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:44 crc kubenswrapper[5049]: I0127 16:58:44.995621 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:44Z","lastTransitionTime":"2026-01-27T16:58:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.099222 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.099297 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.099315 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.099348 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.099367 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:45Z","lastTransitionTime":"2026-01-27T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.203106 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.203195 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.203221 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.203252 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.203274 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:45Z","lastTransitionTime":"2026-01-27T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.307296 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.307384 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.307393 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.307411 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.307423 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:45Z","lastTransitionTime":"2026-01-27T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.411011 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.411079 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.411096 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.411124 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.411145 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:45Z","lastTransitionTime":"2026-01-27T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.514372 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.514488 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.514511 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.514544 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.514568 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:45Z","lastTransitionTime":"2026-01-27T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.618327 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.618398 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.618456 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.618485 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.618505 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:45Z","lastTransitionTime":"2026-01-27T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.645936 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.646032 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.646130 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:45 crc kubenswrapper[5049]: E0127 16:58:45.647213 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:45 crc kubenswrapper[5049]: E0127 16:58:45.647407 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:45 crc kubenswrapper[5049]: E0127 16:58:45.647505 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.647771 5049 scope.go:117] "RemoveContainer" containerID="ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909" Jan 27 16:58:45 crc kubenswrapper[5049]: E0127 16:58:45.648783 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.664434 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:43:41.43229476 +0000 UTC Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.667054 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nfxkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lv4sx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.693506 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27fb4c5c-d521-4c59-bc27-ea166b4aa050\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a43e6e440ae01bd026178464ae487cc57bac0e04ebb4c4f2df41ebf2fde0a7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3d572ed3eb85c99c99c3c13852ee7f90edb48b93450d70ad1d7eef379c807b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2406d93d7334b3fdc70076fa5033d380af155a1c8e3540330179e5087f7e5b5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.714559 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ad1db96-84e1-4083-8023-4d9fdc72dc54\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29c84da654b6b287bd96bdd26e4c0ce623a1f76d3f8e043be531ec0fdceec7ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecd0d0dee40e94fd415f134723784852d44578fdad7e63bb2ee5949245772622\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.715094 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs\") pod \"network-metrics-daemon-lv4sx\" (UID: \"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\") " pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:45 crc kubenswrapper[5049]: E0127 16:58:45.715359 5049 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 16:58:45 crc kubenswrapper[5049]: E0127 16:58:45.715454 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs podName:d48a67e1-cecf-41d6-a42c-52bdcd3ab892 nodeName:}" failed. No retries permitted until 2026-01-27 16:59:49.715434027 +0000 UTC m=+164.814407586 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs") pod "network-metrics-daemon-lv4sx" (UID: "d48a67e1-cecf-41d6-a42c-52bdcd3ab892") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.721424 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.721565 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.721658 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.721840 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.721982 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:45Z","lastTransitionTime":"2026-01-27T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.735788 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e217768-b7b6-48cd-8c3d-a1532a139288\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://249f9a14b67763f99a74ca0345ff25f896e6e3dd03e9f17565edc21ab9f47d0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44ecd9795591ed101f3e376c7420dfd90b5ae96cc37248e45876798f1896d8e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b
881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e70fec1e4c4101872cd6c26a3deb75d95279ff31f9bc274b5ab32d37994cbaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd672df1c715c9fc9f2b4a37d5fe097612d0a8311042b128fe209cd55407037e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.755554 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.780121 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.805992 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5fb8cd6b1dd7741ff0aeb58417259d78a4645e
cbc2ef52eb9d828504e23909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T16:58:33Z\\\",\\\"message\\\":\\\"ing zone local for Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc in node crc\\\\nI0127 16:58:33.655057 7253 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc after 0 failed attempt(s)\\\\nI0127 16:58:33.655068 7253 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc\\\\nI0127 16:58:33.655039 7253 services_controller.go:434] Service openshift-kube-apiserver-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-kube-apiserver-operator 70a45401-9850-413a-87c2-e90a7258374e 4267 0 2025-02-23 05:12:37 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:kube-apiserver-operator] map[exclude.release.openshift.io/internal-openshift-hosted:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:kube-apiserver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc00731395b \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T16:58:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T16:57:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T16:57:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pflv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zmzbf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.819775 5049 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dzlsl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a38a905c-ad0d-4656-a52c-fdf82d861c2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T16:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91fa4a3b04717db67e302a32d79c9b0b6fa823ce268719ee9b575276b3d3988b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T16:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4qwg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T16:57:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dzlsl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T16:58:45Z is after 2025-08-24T17:21:41Z" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.825774 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.826158 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.826268 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.826376 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.826462 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:45Z","lastTransitionTime":"2026-01-27T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.908140 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2zsnk" podStartSLOduration=78.908122663 podStartE2EDuration="1m18.908122663s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:58:45.907747782 +0000 UTC m=+101.006721361" watchObservedRunningTime="2026-01-27 16:58:45.908122663 +0000 UTC m=+101.007096212" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.929909 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.929964 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.929976 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.929996 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.930009 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:45Z","lastTransitionTime":"2026-01-27T16:58:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.934485 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hc4th" podStartSLOduration=78.934447473 podStartE2EDuration="1m18.934447473s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:58:45.934172666 +0000 UTC m=+101.033146225" watchObservedRunningTime="2026-01-27 16:58:45.934447473 +0000 UTC m=+101.033421062" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.957545 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q27t9" podStartSLOduration=78.957510063 podStartE2EDuration="1m18.957510063s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:58:45.956764602 +0000 UTC m=+101.055738171" watchObservedRunningTime="2026-01-27 16:58:45.957510063 +0000 UTC m=+101.056483642" Jan 27 16:58:45 crc kubenswrapper[5049]: I0127 16:58:45.990172 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=19.990150789 podStartE2EDuration="19.990150789s" podCreationTimestamp="2026-01-27 16:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:58:45.988902864 +0000 UTC m=+101.087876413" watchObservedRunningTime="2026-01-27 16:58:45.990150789 +0000 UTC m=+101.089124338" Jan 27 16:58:46 crc 
kubenswrapper[5049]: I0127 16:58:46.022693 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.02265306 podStartE2EDuration="1m19.02265306s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:58:46.009722432 +0000 UTC m=+101.108695981" watchObservedRunningTime="2026-01-27 16:58:46.02265306 +0000 UTC m=+101.121626609" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.032161 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.032211 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.032223 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.032241 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.032256 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:46Z","lastTransitionTime":"2026-01-27T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.051948 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podStartSLOduration=79.051916262 podStartE2EDuration="1m19.051916262s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:58:46.051455549 +0000 UTC m=+101.150429098" watchObservedRunningTime="2026-01-27 16:58:46.051916262 +0000 UTC m=+101.150889811" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.065541 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-l8gpm" podStartSLOduration=80.06552344 podStartE2EDuration="1m20.06552344s" podCreationTimestamp="2026-01-27 16:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:58:46.065217661 +0000 UTC m=+101.164191220" watchObservedRunningTime="2026-01-27 16:58:46.06552344 +0000 UTC m=+101.164496989" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.135093 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.135165 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.135202 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.135234 5049 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.135253 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:46Z","lastTransitionTime":"2026-01-27T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.237824 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.237878 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.237890 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.237910 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.237930 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:46Z","lastTransitionTime":"2026-01-27T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.341186 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.341242 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.341260 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.341286 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.341304 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:46Z","lastTransitionTime":"2026-01-27T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.444241 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.444293 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.444306 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.444324 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.444337 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:46Z","lastTransitionTime":"2026-01-27T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.546807 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.546844 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.546853 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.546866 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.546873 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:46Z","lastTransitionTime":"2026-01-27T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.646127 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:46 crc kubenswrapper[5049]: E0127 16:58:46.646327 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.649119 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.649152 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.649165 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.649178 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.649188 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:46Z","lastTransitionTime":"2026-01-27T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.664724 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:57:02.134639001 +0000 UTC Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.753243 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.753302 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.753319 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.753342 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.753360 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:46Z","lastTransitionTime":"2026-01-27T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.856973 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.857026 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.857045 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.857070 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.857088 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:46Z","lastTransitionTime":"2026-01-27T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.960291 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.960346 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.960360 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.960387 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:46 crc kubenswrapper[5049]: I0127 16:58:46.960406 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:46Z","lastTransitionTime":"2026-01-27T16:58:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.064085 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.064166 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.064184 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.064214 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.064233 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:47Z","lastTransitionTime":"2026-01-27T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.177178 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.177789 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.177804 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.177825 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.177837 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:47Z","lastTransitionTime":"2026-01-27T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.280026 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.280063 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.280071 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.280084 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.280139 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:47Z","lastTransitionTime":"2026-01-27T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.384222 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.384301 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.384324 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.384354 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.384375 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:47Z","lastTransitionTime":"2026-01-27T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.496549 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.496619 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.496638 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.496664 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.496723 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:47Z","lastTransitionTime":"2026-01-27T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.599556 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.599624 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.599641 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.599699 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.599815 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:47Z","lastTransitionTime":"2026-01-27T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.645489 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.645507 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:47 crc kubenswrapper[5049]: E0127 16:58:47.645738 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:47 crc kubenswrapper[5049]: E0127 16:58:47.645953 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.646097 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:47 crc kubenswrapper[5049]: E0127 16:58:47.646204 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.665156 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:27:43.371699631 +0000 UTC Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.704228 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.704286 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.704305 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.704333 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.704353 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:47Z","lastTransitionTime":"2026-01-27T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.808102 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.808169 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.808189 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.808216 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.808235 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:47Z","lastTransitionTime":"2026-01-27T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.911549 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.911618 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.911641 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.911702 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:47 crc kubenswrapper[5049]: I0127 16:58:47.911730 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:47Z","lastTransitionTime":"2026-01-27T16:58:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.014905 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.014964 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.014982 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.015011 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.015030 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:48Z","lastTransitionTime":"2026-01-27T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.118636 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.118787 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.118812 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.118838 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.118859 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:48Z","lastTransitionTime":"2026-01-27T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.222029 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.222098 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.222116 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.222142 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.222159 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:48Z","lastTransitionTime":"2026-01-27T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.325480 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.325560 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.325597 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.325631 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.325652 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:48Z","lastTransitionTime":"2026-01-27T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.429478 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.429552 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.429574 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.429608 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.429634 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:48Z","lastTransitionTime":"2026-01-27T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.533878 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.533956 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.533981 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.534012 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.534035 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:48Z","lastTransitionTime":"2026-01-27T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.637129 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.637205 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.637230 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.637264 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.637293 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:48Z","lastTransitionTime":"2026-01-27T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.645511 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:48 crc kubenswrapper[5049]: E0127 16:58:48.645734 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.666292 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 05:51:20.415007232 +0000 UTC Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.740991 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.741053 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.741072 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.741109 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.741133 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:48Z","lastTransitionTime":"2026-01-27T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.845196 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.845258 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.845276 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.845303 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.845327 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:48Z","lastTransitionTime":"2026-01-27T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.948983 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.949050 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.949071 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.949098 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:48 crc kubenswrapper[5049]: I0127 16:58:48.949117 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:48Z","lastTransitionTime":"2026-01-27T16:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.052589 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.052661 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.052710 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.052739 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.052757 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:49Z","lastTransitionTime":"2026-01-27T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.156917 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.156988 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.157013 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.157044 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.157068 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:49Z","lastTransitionTime":"2026-01-27T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.261054 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.261114 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.261131 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.261156 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.261174 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:49Z","lastTransitionTime":"2026-01-27T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.363822 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.364760 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.364961 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.365113 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.365256 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:49Z","lastTransitionTime":"2026-01-27T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.469291 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.469414 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.469443 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.469476 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.469498 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:49Z","lastTransitionTime":"2026-01-27T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.572852 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.572923 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.572945 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.572978 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.573003 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:49Z","lastTransitionTime":"2026-01-27T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.645993 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.646086 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.646086 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:49 crc kubenswrapper[5049]: E0127 16:58:49.646205 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:49 crc kubenswrapper[5049]: E0127 16:58:49.646325 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:49 crc kubenswrapper[5049]: E0127 16:58:49.646492 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.666489 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:08:07.517367892 +0000 UTC Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.676261 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.676334 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.676368 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.676401 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.676426 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:49Z","lastTransitionTime":"2026-01-27T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.779727 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.779806 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.779824 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.779851 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.779869 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:49Z","lastTransitionTime":"2026-01-27T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.883261 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.883348 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.883366 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.883394 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.883413 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:49Z","lastTransitionTime":"2026-01-27T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.986996 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.987054 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.987073 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.987098 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:49 crc kubenswrapper[5049]: I0127 16:58:49.987116 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:49Z","lastTransitionTime":"2026-01-27T16:58:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.090572 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.090909 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.090937 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.090969 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.090994 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:50Z","lastTransitionTime":"2026-01-27T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.194074 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.194482 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.194648 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.194849 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.194994 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:50Z","lastTransitionTime":"2026-01-27T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.297664 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.297765 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.297785 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.297814 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.297835 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:50Z","lastTransitionTime":"2026-01-27T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.401194 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.401328 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.401357 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.401409 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.401435 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:50Z","lastTransitionTime":"2026-01-27T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.505202 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.505241 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.505277 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.505294 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.505303 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:50Z","lastTransitionTime":"2026-01-27T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.608401 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.608455 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.608467 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.608489 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.608503 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:50Z","lastTransitionTime":"2026-01-27T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.645016 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:50 crc kubenswrapper[5049]: E0127 16:58:50.645200 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.667170 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:39:55.282321828 +0000 UTC Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.711758 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.711832 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.711854 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.711886 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.711908 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:50Z","lastTransitionTime":"2026-01-27T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.814592 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.814633 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.814648 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.814699 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.814727 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:50Z","lastTransitionTime":"2026-01-27T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.917872 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.917938 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.917955 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.917983 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:50 crc kubenswrapper[5049]: I0127 16:58:50.918001 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:50Z","lastTransitionTime":"2026-01-27T16:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:51.021349 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:51.021403 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:51.021420 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:51.021445 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:51.021461 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:51Z","lastTransitionTime":"2026-01-27T16:58:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:51.645440 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:51.645523 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:52 crc kubenswrapper[5049]: E0127 16:58:51.645598 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:51.645624 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:52 crc kubenswrapper[5049]: E0127 16:58:51.645829 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:52 crc kubenswrapper[5049]: E0127 16:58:51.645903 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:51.668031 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:25:00.924020697 +0000 UTC Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.232954 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.233267 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.233437 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.233599 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.233789 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:52Z","lastTransitionTime":"2026-01-27T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.337185 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.337463 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.337474 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.337493 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.337505 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:52Z","lastTransitionTime":"2026-01-27T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.440103 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.440148 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.440161 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.440181 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.440196 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:52Z","lastTransitionTime":"2026-01-27T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.544101 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.544565 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.544752 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.544911 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.545057 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:52Z","lastTransitionTime":"2026-01-27T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.645324 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:52 crc kubenswrapper[5049]: E0127 16:58:52.645540 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.648442 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.648505 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.648522 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.648546 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.648563 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:52Z","lastTransitionTime":"2026-01-27T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.668625 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:37:59.133370387 +0000 UTC Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.752342 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.752413 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.752426 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.752450 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.752465 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:52Z","lastTransitionTime":"2026-01-27T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.855799 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.855870 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.855889 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.855918 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.855936 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:52Z","lastTransitionTime":"2026-01-27T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.959348 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.959422 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.959447 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.959478 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:52 crc kubenswrapper[5049]: I0127 16:58:52.959497 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:52Z","lastTransitionTime":"2026-01-27T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.063560 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.063625 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.063639 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.063665 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.063713 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:53Z","lastTransitionTime":"2026-01-27T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.167745 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.167824 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.167841 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.167865 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.167879 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:53Z","lastTransitionTime":"2026-01-27T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.270869 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.270945 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.270962 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.270986 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.271004 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:53Z","lastTransitionTime":"2026-01-27T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.374599 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.374661 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.374714 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.374750 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.374772 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:53Z","lastTransitionTime":"2026-01-27T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.479163 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.479248 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.479270 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.479313 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.479334 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:53Z","lastTransitionTime":"2026-01-27T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.583663 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.583786 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.583805 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.583846 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.583876 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:53Z","lastTransitionTime":"2026-01-27T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.645858 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.646025 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:53 crc kubenswrapper[5049]: E0127 16:58:53.646083 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.646176 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:53 crc kubenswrapper[5049]: E0127 16:58:53.646304 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:53 crc kubenswrapper[5049]: E0127 16:58:53.646482 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.669584 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 14:38:17.508400558 +0000 UTC Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.687193 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.687268 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.687321 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.687351 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.687369 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:53Z","lastTransitionTime":"2026-01-27T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.790768 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.790829 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.790850 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.790879 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.790897 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:53Z","lastTransitionTime":"2026-01-27T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.894136 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.894206 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.894228 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.894259 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.894281 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:53Z","lastTransitionTime":"2026-01-27T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.997654 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.997759 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.997779 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.997804 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:53 crc kubenswrapper[5049]: I0127 16:58:53.997821 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:53Z","lastTransitionTime":"2026-01-27T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.101156 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.101219 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.101236 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.101261 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.101278 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:54Z","lastTransitionTime":"2026-01-27T16:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.204995 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.205055 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.205078 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.205110 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.205132 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:54Z","lastTransitionTime":"2026-01-27T16:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.308997 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.309044 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.309061 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.309087 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.309105 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:54Z","lastTransitionTime":"2026-01-27T16:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.336472 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.336542 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.336567 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.336602 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.336632 5049 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T16:58:54Z","lastTransitionTime":"2026-01-27T16:58:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.412489 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9"] Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.413510 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.425991 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9e169ff-83b1-4161-b7ae-2dad41e2b141-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: \"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.426168 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b9e169ff-83b1-4161-b7ae-2dad41e2b141-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: \"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.426217 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b9e169ff-83b1-4161-b7ae-2dad41e2b141-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: \"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.426260 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9e169ff-83b1-4161-b7ae-2dad41e2b141-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: \"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.426316 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b9e169ff-83b1-4161-b7ae-2dad41e2b141-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: \"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.427032 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.430242 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.431187 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.431999 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.463164 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=87.46313939 podStartE2EDuration="1m27.46313939s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:58:54.449191943 +0000 UTC m=+109.548165552" watchObservedRunningTime="2026-01-27 16:58:54.46313939 +0000 UTC m=+109.562112949" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.463357 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=41.463349116 podStartE2EDuration="41.463349116s" podCreationTimestamp="2026-01-27 16:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:58:54.46313692 +0000 UTC m=+109.562110489" watchObservedRunningTime="2026-01-27 16:58:54.463349116 +0000 UTC m=+109.562322675" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.498766 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=62.498739048 podStartE2EDuration="1m2.498739048s" podCreationTimestamp="2026-01-27 16:57:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:58:54.483249928 +0000 UTC m=+109.582223567" watchObservedRunningTime="2026-01-27 16:58:54.498739048 +0000 UTC m=+109.597712617" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.526992 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b9e169ff-83b1-4161-b7ae-2dad41e2b141-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: \"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.527046 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/b9e169ff-83b1-4161-b7ae-2dad41e2b141-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: \"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.527082 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9e169ff-83b1-4161-b7ae-2dad41e2b141-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: \"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.527132 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b9e169ff-83b1-4161-b7ae-2dad41e2b141-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: \"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.527169 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9e169ff-83b1-4161-b7ae-2dad41e2b141-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: \"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.527205 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b9e169ff-83b1-4161-b7ae-2dad41e2b141-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: \"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.527385 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b9e169ff-83b1-4161-b7ae-2dad41e2b141-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: \"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.529325 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b9e169ff-83b1-4161-b7ae-2dad41e2b141-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: \"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.538151 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9e169ff-83b1-4161-b7ae-2dad41e2b141-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: \"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.555146 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9e169ff-83b1-4161-b7ae-2dad41e2b141-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bshx9\" (UID: 
\"b9e169ff-83b1-4161-b7ae-2dad41e2b141\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.588665 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-dzlsl" podStartSLOduration=88.588628472 podStartE2EDuration="1m28.588628472s" podCreationTimestamp="2026-01-27 16:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:58:54.572111053 +0000 UTC m=+109.671084692" watchObservedRunningTime="2026-01-27 16:58:54.588628472 +0000 UTC m=+109.687602071" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.645186 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:54 crc kubenswrapper[5049]: E0127 16:58:54.645381 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.670495 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 17:24:25.936735673 +0000 UTC Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.670564 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.682664 5049 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 16:58:54 crc kubenswrapper[5049]: I0127 16:58:54.745419 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" Jan 27 16:58:55 crc kubenswrapper[5049]: I0127 16:58:55.250446 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" event={"ID":"b9e169ff-83b1-4161-b7ae-2dad41e2b141","Type":"ContainerStarted","Data":"0761da3804a3e846f541c83e994972769f884ae78cc0c3119517d87e884040e7"} Jan 27 16:58:55 crc kubenswrapper[5049]: I0127 16:58:55.250573 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" event={"ID":"b9e169ff-83b1-4161-b7ae-2dad41e2b141","Type":"ContainerStarted","Data":"7ffd832c542a167ecdbf88f155c65bd7193f4c4e89a16e5015abe58a220bcce5"} Jan 27 16:58:55 crc kubenswrapper[5049]: I0127 16:58:55.272860 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bshx9" podStartSLOduration=88.272829223 podStartE2EDuration="1m28.272829223s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:58:55.271545947 +0000 UTC m=+110.370519556" watchObservedRunningTime="2026-01-27 16:58:55.272829223 +0000 UTC m=+110.371802802" Jan 27 16:58:55 crc kubenswrapper[5049]: I0127 16:58:55.645461 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:55 crc kubenswrapper[5049]: I0127 16:58:55.645915 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:55 crc kubenswrapper[5049]: I0127 16:58:55.646070 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:55 crc kubenswrapper[5049]: E0127 16:58:55.648247 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:55 crc kubenswrapper[5049]: E0127 16:58:55.648372 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:55 crc kubenswrapper[5049]: E0127 16:58:55.648494 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:56 crc kubenswrapper[5049]: I0127 16:58:56.645462 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:56 crc kubenswrapper[5049]: E0127 16:58:56.645959 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:56 crc kubenswrapper[5049]: I0127 16:58:56.646284 5049 scope.go:117] "RemoveContainer" containerID="ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909" Jan 27 16:58:56 crc kubenswrapper[5049]: E0127 16:58:56.646463 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" Jan 27 16:58:57 crc kubenswrapper[5049]: I0127 16:58:57.645830 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:57 crc kubenswrapper[5049]: I0127 16:58:57.645881 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:57 crc kubenswrapper[5049]: I0127 16:58:57.645894 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:57 crc kubenswrapper[5049]: E0127 16:58:57.646259 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:57 crc kubenswrapper[5049]: E0127 16:58:57.646424 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:57 crc kubenswrapper[5049]: E0127 16:58:57.646485 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:58:58 crc kubenswrapper[5049]: I0127 16:58:58.645180 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:58:58 crc kubenswrapper[5049]: E0127 16:58:58.645351 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:58:59 crc kubenswrapper[5049]: I0127 16:58:59.646050 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:58:59 crc kubenswrapper[5049]: E0127 16:58:59.646214 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:58:59 crc kubenswrapper[5049]: I0127 16:58:59.646315 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:58:59 crc kubenswrapper[5049]: I0127 16:58:59.646398 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:58:59 crc kubenswrapper[5049]: E0127 16:58:59.646538 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:58:59 crc kubenswrapper[5049]: E0127 16:58:59.646919 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:00 crc kubenswrapper[5049]: I0127 16:59:00.645562 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:00 crc kubenswrapper[5049]: E0127 16:59:00.645789 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:01 crc kubenswrapper[5049]: I0127 16:59:01.281307 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hc4th_7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b/kube-multus/1.log" Jan 27 16:59:01 crc kubenswrapper[5049]: I0127 16:59:01.283895 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hc4th_7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b/kube-multus/0.log" Jan 27 16:59:01 crc kubenswrapper[5049]: I0127 16:59:01.283970 5049 generic.go:334] "Generic (PLEG): container finished" podID="7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b" containerID="836b443e3565d68c8d2b62b22874ce3ba84e9c4088924b18c8aafffd4ff804f0" exitCode=1 Jan 27 16:59:01 crc kubenswrapper[5049]: I0127 16:59:01.284041 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hc4th" event={"ID":"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b","Type":"ContainerDied","Data":"836b443e3565d68c8d2b62b22874ce3ba84e9c4088924b18c8aafffd4ff804f0"} Jan 27 16:59:01 crc kubenswrapper[5049]: I0127 16:59:01.284116 5049 scope.go:117] "RemoveContainer" containerID="b60acc2d82e591077df9908c7981776251dc1673724fb08da098c025c8105afd" Jan 27 16:59:01 crc kubenswrapper[5049]: I0127 16:59:01.285038 5049 scope.go:117] "RemoveContainer" containerID="836b443e3565d68c8d2b62b22874ce3ba84e9c4088924b18c8aafffd4ff804f0" Jan 27 16:59:01 crc kubenswrapper[5049]: E0127 16:59:01.285356 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hc4th_openshift-multus(7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b)\"" pod="openshift-multus/multus-hc4th" podUID="7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b" Jan 27 16:59:01 crc kubenswrapper[5049]: I0127 16:59:01.645927 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:01 crc kubenswrapper[5049]: I0127 16:59:01.645979 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:01 crc kubenswrapper[5049]: E0127 16:59:01.646215 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:01 crc kubenswrapper[5049]: I0127 16:59:01.646323 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:01 crc kubenswrapper[5049]: E0127 16:59:01.646528 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:01 crc kubenswrapper[5049]: E0127 16:59:01.646721 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:02 crc kubenswrapper[5049]: I0127 16:59:02.293061 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hc4th_7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b/kube-multus/1.log" Jan 27 16:59:02 crc kubenswrapper[5049]: I0127 16:59:02.645822 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:02 crc kubenswrapper[5049]: E0127 16:59:02.645985 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:03 crc kubenswrapper[5049]: I0127 16:59:03.645103 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:03 crc kubenswrapper[5049]: I0127 16:59:03.645107 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:03 crc kubenswrapper[5049]: E0127 16:59:03.645572 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:03 crc kubenswrapper[5049]: I0127 16:59:03.645157 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:03 crc kubenswrapper[5049]: E0127 16:59:03.645723 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:03 crc kubenswrapper[5049]: E0127 16:59:03.645928 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:04 crc kubenswrapper[5049]: I0127 16:59:04.645262 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:04 crc kubenswrapper[5049]: E0127 16:59:04.645491 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:05 crc kubenswrapper[5049]: E0127 16:59:05.633301 5049 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 16:59:05 crc kubenswrapper[5049]: I0127 16:59:05.645422 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:05 crc kubenswrapper[5049]: I0127 16:59:05.645420 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:05 crc kubenswrapper[5049]: E0127 16:59:05.647653 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:05 crc kubenswrapper[5049]: I0127 16:59:05.647770 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:05 crc kubenswrapper[5049]: E0127 16:59:05.648037 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:05 crc kubenswrapper[5049]: E0127 16:59:05.648143 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:05 crc kubenswrapper[5049]: E0127 16:59:05.752052 5049 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 16:59:06 crc kubenswrapper[5049]: I0127 16:59:06.645576 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:06 crc kubenswrapper[5049]: E0127 16:59:06.645868 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:07 crc kubenswrapper[5049]: I0127 16:59:07.645538 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:07 crc kubenswrapper[5049]: E0127 16:59:07.645833 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:07 crc kubenswrapper[5049]: I0127 16:59:07.645910 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:07 crc kubenswrapper[5049]: I0127 16:59:07.646108 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:07 crc kubenswrapper[5049]: E0127 16:59:07.646141 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:07 crc kubenswrapper[5049]: E0127 16:59:07.646386 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:08 crc kubenswrapper[5049]: I0127 16:59:08.645350 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:08 crc kubenswrapper[5049]: E0127 16:59:08.645780 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:09 crc kubenswrapper[5049]: I0127 16:59:09.645986 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:09 crc kubenswrapper[5049]: I0127 16:59:09.646041 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:09 crc kubenswrapper[5049]: E0127 16:59:09.646204 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:09 crc kubenswrapper[5049]: I0127 16:59:09.646254 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:09 crc kubenswrapper[5049]: E0127 16:59:09.646426 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:09 crc kubenswrapper[5049]: E0127 16:59:09.646600 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:10 crc kubenswrapper[5049]: I0127 16:59:10.645512 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:10 crc kubenswrapper[5049]: E0127 16:59:10.645696 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:10 crc kubenswrapper[5049]: I0127 16:59:10.646449 5049 scope.go:117] "RemoveContainer" containerID="ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909" Jan 27 16:59:10 crc kubenswrapper[5049]: E0127 16:59:10.646650 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zmzbf_openshift-ovn-kubernetes(b0ca704c-b740-43c4-845f-7de5bfa5a29c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" Jan 27 16:59:10 crc kubenswrapper[5049]: E0127 16:59:10.753632 5049 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 16:59:11 crc kubenswrapper[5049]: I0127 16:59:11.646062 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:11 crc kubenswrapper[5049]: I0127 16:59:11.646220 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:11 crc kubenswrapper[5049]: E0127 16:59:11.646284 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:11 crc kubenswrapper[5049]: I0127 16:59:11.646103 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:11 crc kubenswrapper[5049]: E0127 16:59:11.646452 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:11 crc kubenswrapper[5049]: E0127 16:59:11.646764 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:12 crc kubenswrapper[5049]: I0127 16:59:12.645521 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:12 crc kubenswrapper[5049]: E0127 16:59:12.645734 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:13 crc kubenswrapper[5049]: I0127 16:59:13.645110 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:13 crc kubenswrapper[5049]: I0127 16:59:13.645154 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:13 crc kubenswrapper[5049]: I0127 16:59:13.645271 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:13 crc kubenswrapper[5049]: E0127 16:59:13.645473 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:13 crc kubenswrapper[5049]: E0127 16:59:13.645980 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:13 crc kubenswrapper[5049]: E0127 16:59:13.646091 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:14 crc kubenswrapper[5049]: I0127 16:59:14.645938 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:14 crc kubenswrapper[5049]: E0127 16:59:14.646410 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:14 crc kubenswrapper[5049]: I0127 16:59:14.646564 5049 scope.go:117] "RemoveContainer" containerID="836b443e3565d68c8d2b62b22874ce3ba84e9c4088924b18c8aafffd4ff804f0" Jan 27 16:59:15 crc kubenswrapper[5049]: I0127 16:59:15.348713 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hc4th_7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b/kube-multus/1.log" Jan 27 16:59:15 crc kubenswrapper[5049]: I0127 16:59:15.349089 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hc4th" event={"ID":"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b","Type":"ContainerStarted","Data":"9dbc006b8f9a5b749592de92d059f3931ab80241487ffca677bd8d2d860efbbb"} Jan 27 16:59:15 crc kubenswrapper[5049]: I0127 16:59:15.645626 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:15 crc kubenswrapper[5049]: I0127 16:59:15.645633 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:15 crc kubenswrapper[5049]: I0127 16:59:15.645724 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:15 crc kubenswrapper[5049]: E0127 16:59:15.649000 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:15 crc kubenswrapper[5049]: E0127 16:59:15.649112 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:15 crc kubenswrapper[5049]: E0127 16:59:15.649199 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:15 crc kubenswrapper[5049]: E0127 16:59:15.754266 5049 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 16:59:16 crc kubenswrapper[5049]: I0127 16:59:16.645922 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:16 crc kubenswrapper[5049]: E0127 16:59:16.646098 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:17 crc kubenswrapper[5049]: I0127 16:59:17.646105 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:17 crc kubenswrapper[5049]: I0127 16:59:17.646191 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:17 crc kubenswrapper[5049]: E0127 16:59:17.646333 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:17 crc kubenswrapper[5049]: E0127 16:59:17.646477 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:17 crc kubenswrapper[5049]: I0127 16:59:17.646551 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:17 crc kubenswrapper[5049]: E0127 16:59:17.646642 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:18 crc kubenswrapper[5049]: I0127 16:59:18.645337 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:18 crc kubenswrapper[5049]: E0127 16:59:18.645558 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:19 crc kubenswrapper[5049]: I0127 16:59:19.645517 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:19 crc kubenswrapper[5049]: I0127 16:59:19.645558 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:19 crc kubenswrapper[5049]: E0127 16:59:19.645764 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:19 crc kubenswrapper[5049]: I0127 16:59:19.645902 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:19 crc kubenswrapper[5049]: E0127 16:59:19.646080 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:19 crc kubenswrapper[5049]: E0127 16:59:19.646162 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:20 crc kubenswrapper[5049]: I0127 16:59:20.645241 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:20 crc kubenswrapper[5049]: E0127 16:59:20.645466 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:20 crc kubenswrapper[5049]: E0127 16:59:20.756116 5049 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 16:59:21 crc kubenswrapper[5049]: I0127 16:59:21.646167 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:21 crc kubenswrapper[5049]: I0127 16:59:21.646218 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:21 crc kubenswrapper[5049]: E0127 16:59:21.646444 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:21 crc kubenswrapper[5049]: I0127 16:59:21.646508 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:21 crc kubenswrapper[5049]: E0127 16:59:21.646747 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:21 crc kubenswrapper[5049]: E0127 16:59:21.646964 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:22 crc kubenswrapper[5049]: I0127 16:59:22.645487 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:22 crc kubenswrapper[5049]: E0127 16:59:22.645751 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:23 crc kubenswrapper[5049]: I0127 16:59:23.645934 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:23 crc kubenswrapper[5049]: I0127 16:59:23.645936 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:23 crc kubenswrapper[5049]: I0127 16:59:23.646119 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:23 crc kubenswrapper[5049]: E0127 16:59:23.646277 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:23 crc kubenswrapper[5049]: E0127 16:59:23.646599 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:23 crc kubenswrapper[5049]: E0127 16:59:23.646651 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:24 crc kubenswrapper[5049]: I0127 16:59:24.645920 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:24 crc kubenswrapper[5049]: E0127 16:59:24.646135 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:25 crc kubenswrapper[5049]: I0127 16:59:25.645744 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:25 crc kubenswrapper[5049]: E0127 16:59:25.647609 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:25 crc kubenswrapper[5049]: I0127 16:59:25.647771 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:25 crc kubenswrapper[5049]: I0127 16:59:25.648483 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:25 crc kubenswrapper[5049]: E0127 16:59:25.648609 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:25 crc kubenswrapper[5049]: E0127 16:59:25.649005 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:25 crc kubenswrapper[5049]: I0127 16:59:25.649046 5049 scope.go:117] "RemoveContainer" containerID="ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909" Jan 27 16:59:25 crc kubenswrapper[5049]: E0127 16:59:25.756861 5049 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 16:59:26 crc kubenswrapper[5049]: I0127 16:59:26.397925 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/3.log" Jan 27 16:59:26 crc kubenswrapper[5049]: I0127 16:59:26.401619 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerStarted","Data":"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac"} Jan 27 16:59:26 crc kubenswrapper[5049]: I0127 16:59:26.645709 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:26 crc kubenswrapper[5049]: E0127 16:59:26.645888 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:27 crc kubenswrapper[5049]: I0127 16:59:27.194297 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lv4sx"] Jan 27 16:59:27 crc kubenswrapper[5049]: I0127 16:59:27.194465 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:27 crc kubenswrapper[5049]: E0127 16:59:27.194590 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:27 crc kubenswrapper[5049]: I0127 16:59:27.408081 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 16:59:27 crc kubenswrapper[5049]: I0127 16:59:27.438070 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podStartSLOduration=120.438023153 podStartE2EDuration="2m0.438023153s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:27.436888569 +0000 UTC m=+142.535862188" watchObservedRunningTime="2026-01-27 16:59:27.438023153 +0000 UTC m=+142.536996752" Jan 27 16:59:27 crc kubenswrapper[5049]: I0127 16:59:27.645947 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:27 crc kubenswrapper[5049]: I0127 16:59:27.645994 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:27 crc kubenswrapper[5049]: E0127 16:59:27.646089 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:27 crc kubenswrapper[5049]: E0127 16:59:27.646284 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:28 crc kubenswrapper[5049]: I0127 16:59:28.645244 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:28 crc kubenswrapper[5049]: E0127 16:59:28.645472 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:29 crc kubenswrapper[5049]: I0127 16:59:29.645942 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:29 crc kubenswrapper[5049]: I0127 16:59:29.646057 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:29 crc kubenswrapper[5049]: E0127 16:59:29.646094 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 16:59:29 crc kubenswrapper[5049]: I0127 16:59:29.646141 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:29 crc kubenswrapper[5049]: E0127 16:59:29.646254 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 16:59:29 crc kubenswrapper[5049]: E0127 16:59:29.646568 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lv4sx" podUID="d48a67e1-cecf-41d6-a42c-52bdcd3ab892" Jan 27 16:59:30 crc kubenswrapper[5049]: I0127 16:59:30.646034 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:30 crc kubenswrapper[5049]: E0127 16:59:30.646239 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 16:59:31 crc kubenswrapper[5049]: I0127 16:59:31.645845 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:31 crc kubenswrapper[5049]: I0127 16:59:31.646086 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:31 crc kubenswrapper[5049]: I0127 16:59:31.646146 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:31 crc kubenswrapper[5049]: I0127 16:59:31.658879 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 16:59:31 crc kubenswrapper[5049]: I0127 16:59:31.659090 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 16:59:31 crc kubenswrapper[5049]: I0127 16:59:31.659960 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 16:59:31 crc kubenswrapper[5049]: I0127 16:59:31.660001 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 16:59:31 crc kubenswrapper[5049]: I0127 16:59:31.660389 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 16:59:31 crc kubenswrapper[5049]: I0127 16:59:31.660381 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 16:59:32 crc kubenswrapper[5049]: I0127 16:59:32.645512 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:33 crc kubenswrapper[5049]: I0127 16:59:33.523644 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:33 crc kubenswrapper[5049]: E0127 16:59:33.523879 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 17:01:35.523851695 +0000 UTC m=+270.622825284 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:33 crc kubenswrapper[5049]: I0127 16:59:33.523944 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:33 crc kubenswrapper[5049]: I0127 16:59:33.523999 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:33 crc kubenswrapper[5049]: I0127 16:59:33.525778 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:33 crc kubenswrapper[5049]: I0127 16:59:33.534220 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:33 crc kubenswrapper[5049]: I0127 16:59:33.625001 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:33 crc kubenswrapper[5049]: I0127 16:59:33.625093 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:33 crc kubenswrapper[5049]: I0127 16:59:33.631164 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:33 crc kubenswrapper[5049]: I0127 16:59:33.632213 5049 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:33 crc kubenswrapper[5049]: I0127 16:59:33.788232 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 16:59:33 crc kubenswrapper[5049]: I0127 16:59:33.812406 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 16:59:33 crc kubenswrapper[5049]: I0127 16:59:33.867160 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:34 crc kubenswrapper[5049]: W0127 16:59:34.118176 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-79b0e4601e2a4f87d60cd5c8469c33dde90ea98482f7307aaedf6b35dce6fc40 WatchSource:0}: Error finding container 79b0e4601e2a4f87d60cd5c8469c33dde90ea98482f7307aaedf6b35dce6fc40: Status 404 returned error can't find the container with id 79b0e4601e2a4f87d60cd5c8469c33dde90ea98482f7307aaedf6b35dce6fc40 Jan 27 16:59:34 crc kubenswrapper[5049]: I0127 16:59:34.440828 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e3531ea8177864ed32319f380e63c524b3a96ded6a7a4b10dfe4fe7e6c1c1f6d"} Jan 27 16:59:34 crc kubenswrapper[5049]: I0127 16:59:34.441329 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"566ec60ab8bb7f92c3dec45a86f89e59b5c4cc0be817d1e021c35e7ee90aab44"} Jan 27 16:59:34 crc kubenswrapper[5049]: I0127 16:59:34.443901 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c0bed8b244cbddd70f226bf8969dd7233aef0eb357f0da388de945fa5fb2d2fd"} Jan 27 16:59:34 crc kubenswrapper[5049]: I0127 16:59:34.443975 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"79b0e4601e2a4f87d60cd5c8469c33dde90ea98482f7307aaedf6b35dce6fc40"} Jan 27 16:59:34 crc kubenswrapper[5049]: I0127 16:59:34.444194 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 16:59:34 crc kubenswrapper[5049]: I0127 16:59:34.446840 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c2238c022884ab0f2487020cc39e0b9856123f269f62b2a7719cdeedf5ba5f00"} Jan 27 16:59:34 crc kubenswrapper[5049]: I0127 16:59:34.446912 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8d26f98668217d4f293500bff6823c2e6388cd9c2da18c5084942a63300732a5"} Jan 27 16:59:34 crc kubenswrapper[5049]: I0127 16:59:34.940006 5049 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 16:59:34 crc kubenswrapper[5049]: I0127 16:59:34.997635 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt"] Jan 27 16:59:34 crc kubenswrapper[5049]: I0127 16:59:34.998635 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.007158 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.008017 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.010762 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.013783 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.014750 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.015420 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.016079 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.024636 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.025082 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.025524 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.025588 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.025918 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.025979 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.026278 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.026549 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.027943 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.028086 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.028332 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.028494 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.028730 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.029100 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.029216 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.029249 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.029497 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.030438 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 
16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.031473 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qxz5r"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.032315 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.033311 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tn44m"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.033810 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.039595 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.040221 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.040542 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.041104 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.041361 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.041592 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.041622 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.042964 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d64fb1d4-8e82-4fbe-9767-20d358bccb0c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-77q95\" (UID: \"d64fb1d4-8e82-4fbe-9767-20d358bccb0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043013 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qm52\" (UniqueName: \"kubernetes.io/projected/421d5aab-975e-49c9-8ea4-eef2e635c7f7-kube-api-access-2qm52\") pod \"ingress-operator-5b745b69d9-nvqqq\" (UID: \"421d5aab-975e-49c9-8ea4-eef2e635c7f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043064 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9b333e1-888e-4515-b954-c8cbfd4af83a-client-ca\") pod \"route-controller-manager-6576b87f9c-z8wm5\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043096 5049 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7637684-717f-4bf3-bba2-cd3dec71715d-audit-dir\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043126 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043157 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a320bf5-a2af-4d11-9656-802d906d46b9-audit-dir\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043189 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b333e1-888e-4515-b954-c8cbfd4af83a-config\") pod \"route-controller-manager-6576b87f9c-z8wm5\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043291 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-audit-policies\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043341 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043387 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043422 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a342c4-f1fe-4574-b415-294717ae5c7f-config\") pod \"console-operator-58897d9998-qxz5r\" (UID: \"a5a342c4-f1fe-4574-b415-294717ae5c7f\") " pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043449 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a320bf5-a2af-4d11-9656-802d906d46b9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043457 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043474 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddhq7\" (UniqueName: \"kubernetes.io/projected/d64fb1d4-8e82-4fbe-9767-20d358bccb0c-kube-api-access-ddhq7\") pod \"openshift-controller-manager-operator-756b6f6bc6-77q95\" (UID: \"d64fb1d4-8e82-4fbe-9767-20d358bccb0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043495 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043520 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4a320bf5-a2af-4d11-9656-802d906d46b9-encryption-config\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043556 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043583 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zs2b\" (UniqueName: \"kubernetes.io/projected/4a320bf5-a2af-4d11-9656-802d906d46b9-kube-api-access-5zs2b\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043601 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d64fb1d4-8e82-4fbe-9767-20d358bccb0c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-77q95\" (UID: \"d64fb1d4-8e82-4fbe-9767-20d358bccb0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043625 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/421d5aab-975e-49c9-8ea4-eef2e635c7f7-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-nvqqq\" (UID: \"421d5aab-975e-49c9-8ea4-eef2e635c7f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043646 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6sjc\" (UniqueName: \"kubernetes.io/projected/e9b333e1-888e-4515-b954-c8cbfd4af83a-kube-api-access-z6sjc\") pod \"route-controller-manager-6576b87f9c-z8wm5\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043689 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043712 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4a320bf5-a2af-4d11-9656-802d906d46b9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043736 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/421d5aab-975e-49c9-8ea4-eef2e635c7f7-metrics-tls\") pod \"ingress-operator-5b745b69d9-nvqqq\" (UID: \"421d5aab-975e-49c9-8ea4-eef2e635c7f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043651 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043805 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9b333e1-888e-4515-b954-c8cbfd4af83a-serving-cert\") pod \"route-controller-manager-6576b87f9c-z8wm5\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043863 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043904 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043924 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.043969 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5a342c4-f1fe-4574-b415-294717ae5c7f-trusted-ca\") pod \"console-operator-58897d9998-qxz5r\" (UID: \"a5a342c4-f1fe-4574-b415-294717ae5c7f\") " pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.044008 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.044007 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a320bf5-a2af-4d11-9656-802d906d46b9-serving-cert\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.044113 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb84g\" (UniqueName: \"kubernetes.io/projected/b7637684-717f-4bf3-bba2-cd3dec71715d-kube-api-access-mb84g\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.044149 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/421d5aab-975e-49c9-8ea4-eef2e635c7f7-trusted-ca\") pod \"ingress-operator-5b745b69d9-nvqqq\" (UID: \"421d5aab-975e-49c9-8ea4-eef2e635c7f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.044180 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.044205 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.044257 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5a342c4-f1fe-4574-b415-294717ae5c7f-serving-cert\") pod \"console-operator-58897d9998-qxz5r\" (UID: \"a5a342c4-f1fe-4574-b415-294717ae5c7f\") " pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.044286 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.044313 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4a320bf5-a2af-4d11-9656-802d906d46b9-etcd-client\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.044346 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtwgg\" (UniqueName: \"kubernetes.io/projected/a5a342c4-f1fe-4574-b415-294717ae5c7f-kube-api-access-mtwgg\") pod \"console-operator-58897d9998-qxz5r\" (UID: \"a5a342c4-f1fe-4574-b415-294717ae5c7f\") " pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.044380 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a320bf5-a2af-4d11-9656-802d906d46b9-audit-policies\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.048800 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.053976 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.054525 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.054627 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.054884 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.055092 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.055136 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.055291 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.055468 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.055603 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 16:59:35 
crc kubenswrapper[5049]: I0127 16:59:35.055770 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.056409 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.063210 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.063254 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.063379 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.063896 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.069230 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.072200 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.072787 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.084649 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.085048 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.084661 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.085040 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.088873 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-c9dvb"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.089656 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.089833 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.090022 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.098661 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.099066 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.100251 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.100362 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.100798 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.112572 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.114305 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.114431 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.115170 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.119804 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gmn44"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.120447 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.120550 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h8z99"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.120901 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-qnqlr"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.120997 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.121255 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pbgwv"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.121292 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gmn44" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.121726 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.122143 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.122427 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h8z99" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.124428 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.125297 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.125901 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.126265 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.126463 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.126812 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.127017 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.127210 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.127408 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.127611 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.127819 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.128009 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.129664 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-msgbv"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.130159 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.130412 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cpnxt"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.130654 5049 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"trusted-ca" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.130897 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.131246 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.135792 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.137921 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-msgbv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.137985 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.138287 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-q5xl9"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.138530 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.138806 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.138940 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.138947 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.139066 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.139263 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ttw4x"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.139207 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.139491 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.139714 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.139736 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.139957 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.140560 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.156597 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c5pfl"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.157703 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.159373 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.159523 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d64fb1d4-8e82-4fbe-9767-20d358bccb0c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-77q95\" (UID: \"d64fb1d4-8e82-4fbe-9767-20d358bccb0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.159625 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zs2b\" (UniqueName: \"kubernetes.io/projected/4a320bf5-a2af-4d11-9656-802d906d46b9-kube-api-access-5zs2b\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.159723 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/421d5aab-975e-49c9-8ea4-eef2e635c7f7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nvqqq\" (UID: \"421d5aab-975e-49c9-8ea4-eef2e635c7f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.159782 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4d4c630-42d2-490d-8782-1fdb7723181d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-c9dvb\" (UID: \"e4d4c630-42d2-490d-8782-1fdb7723181d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.159826 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.159874 5049 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z6sjc\" (UniqueName: \"kubernetes.io/projected/e9b333e1-888e-4515-b954-c8cbfd4af83a-kube-api-access-z6sjc\") pod \"route-controller-manager-6576b87f9c-z8wm5\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.159925 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e4d4c630-42d2-490d-8782-1fdb7723181d-images\") pod \"machine-api-operator-5694c8668f-c9dvb\" (UID: \"e4d4c630-42d2-490d-8782-1fdb7723181d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.159967 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4a320bf5-a2af-4d11-9656-802d906d46b9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160002 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/421d5aab-975e-49c9-8ea4-eef2e635c7f7-metrics-tls\") pod \"ingress-operator-5b745b69d9-nvqqq\" (UID: \"421d5aab-975e-49c9-8ea4-eef2e635c7f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160042 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9b333e1-888e-4515-b954-c8cbfd4af83a-serving-cert\") pod \"route-controller-manager-6576b87f9c-z8wm5\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160083 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160126 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5a342c4-f1fe-4574-b415-294717ae5c7f-trusted-ca\") pod \"console-operator-58897d9998-qxz5r\" (UID: \"a5a342c4-f1fe-4574-b415-294717ae5c7f\") " pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160170 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160216 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4a320bf5-a2af-4d11-9656-802d906d46b9-serving-cert\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160287 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb84g\" (UniqueName: \"kubernetes.io/projected/b7637684-717f-4bf3-bba2-cd3dec71715d-kube-api-access-mb84g\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160356 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/421d5aab-975e-49c9-8ea4-eef2e635c7f7-trusted-ca\") pod \"ingress-operator-5b745b69d9-nvqqq\" (UID: \"421d5aab-975e-49c9-8ea4-eef2e635c7f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160425 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5a342c4-f1fe-4574-b415-294717ae5c7f-serving-cert\") pod \"console-operator-58897d9998-qxz5r\" (UID: \"a5a342c4-f1fe-4574-b415-294717ae5c7f\") " pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160479 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160520 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160561 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160607 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vg8f\" (UniqueName: \"kubernetes.io/projected/e4d4c630-42d2-490d-8782-1fdb7723181d-kube-api-access-4vg8f\") pod \"machine-api-operator-5694c8668f-c9dvb\" (UID: \"e4d4c630-42d2-490d-8782-1fdb7723181d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160654 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a80192f5-0bea-48df-a5b5-cae9402eb6fe-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160739 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4a320bf5-a2af-4d11-9656-802d906d46b9-etcd-client\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160790 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a80192f5-0bea-48df-a5b5-cae9402eb6fe-service-ca-bundle\") pod \"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160835 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtwgg\" (UniqueName: \"kubernetes.io/projected/a5a342c4-f1fe-4574-b415-294717ae5c7f-kube-api-access-mtwgg\") pod \"console-operator-58897d9998-qxz5r\" (UID: \"a5a342c4-f1fe-4574-b415-294717ae5c7f\") " pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160885 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a320bf5-a2af-4d11-9656-802d906d46b9-audit-policies\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160927 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d64fb1d4-8e82-4fbe-9767-20d358bccb0c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-77q95\" (UID: \"d64fb1d4-8e82-4fbe-9767-20d358bccb0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160973 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qm52\" (UniqueName: \"kubernetes.io/projected/421d5aab-975e-49c9-8ea4-eef2e635c7f7-kube-api-access-2qm52\") pod \"ingress-operator-5b745b69d9-nvqqq\" (UID: \"421d5aab-975e-49c9-8ea4-eef2e635c7f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161011 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a80192f5-0bea-48df-a5b5-cae9402eb6fe-config\") pod \"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161094 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9b333e1-888e-4515-b954-c8cbfd4af83a-client-ca\") pod \"route-controller-manager-6576b87f9c-z8wm5\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161145 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7637684-717f-4bf3-bba2-cd3dec71715d-audit-dir\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161188 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161228 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a320bf5-a2af-4d11-9656-802d906d46b9-audit-dir\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161273 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b333e1-888e-4515-b954-c8cbfd4af83a-config\") pod \"route-controller-manager-6576b87f9c-z8wm5\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161402 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a80192f5-0bea-48df-a5b5-cae9402eb6fe-serving-cert\") pod \"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161429 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161471 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-audit-policies\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161534 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 
16:59:35.161613 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a342c4-f1fe-4574-b415-294717ae5c7f-config\") pod \"console-operator-58897d9998-qxz5r\" (UID: \"a5a342c4-f1fe-4574-b415-294717ae5c7f\") " pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161662 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161703 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a320bf5-a2af-4d11-9656-802d906d46b9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161722 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddhq7\" (UniqueName: \"kubernetes.io/projected/d64fb1d4-8e82-4fbe-9767-20d358bccb0c-kube-api-access-ddhq7\") pod \"openshift-controller-manager-operator-756b6f6bc6-77q95\" (UID: \"d64fb1d4-8e82-4fbe-9767-20d358bccb0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161746 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6h5k\" (UniqueName: \"kubernetes.io/projected/a80192f5-0bea-48df-a5b5-cae9402eb6fe-kube-api-access-t6h5k\") pod \"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161766 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4a320bf5-a2af-4d11-9656-802d906d46b9-encryption-config\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161782 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161800 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d4c630-42d2-490d-8782-1fdb7723181d-config\") pod \"machine-api-operator-5694c8668f-c9dvb\" (UID: \"e4d4c630-42d2-490d-8782-1fdb7723181d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.162253 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/d64fb1d4-8e82-4fbe-9767-20d358bccb0c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-77q95\" (UID: \"d64fb1d4-8e82-4fbe-9767-20d358bccb0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.165178 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/421d5aab-975e-49c9-8ea4-eef2e635c7f7-trusted-ca\") pod \"ingress-operator-5b745b69d9-nvqqq\" (UID: \"421d5aab-975e-49c9-8ea4-eef2e635c7f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.166125 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4a320bf5-a2af-4d11-9656-802d906d46b9-audit-policies\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.176811 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.177175 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a320bf5-a2af-4d11-9656-802d906d46b9-serving-cert\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.177643 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d64fb1d4-8e82-4fbe-9767-20d358bccb0c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-77q95\" (UID: \"d64fb1d4-8e82-4fbe-9767-20d358bccb0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.177916 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9b333e1-888e-4515-b954-c8cbfd4af83a-client-ca\") pod \"route-controller-manager-6576b87f9c-z8wm5\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.178034 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7637684-717f-4bf3-bba2-cd3dec71715d-audit-dir\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.178158 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5a342c4-f1fe-4574-b415-294717ae5c7f-serving-cert\") pod \"console-operator-58897d9998-qxz5r\" (UID: \"a5a342c4-f1fe-4574-b415-294717ae5c7f\") " pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.178780 5049 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.180227 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a320bf5-a2af-4d11-9656-802d906d46b9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.180739 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b333e1-888e-4515-b954-c8cbfd4af83a-config\") pod \"route-controller-manager-6576b87f9c-z8wm5\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.180790 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-audit-policies\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.159398 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.181007 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.159508 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.181331 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.181660 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.181840 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.159559 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.182310 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5a342c4-f1fe-4574-b415-294717ae5c7f-config\") pod \"console-operator-58897d9998-qxz5r\" (UID: \"a5a342c4-f1fe-4574-b415-294717ae5c7f\") " pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.183368 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.183534 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.184508 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.185688 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9vk5m"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.186545 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4a320bf5-a2af-4d11-9656-802d906d46b9-encryption-config\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.186771 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vk5m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.187024 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.187313 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.187438 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.189180 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160447 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160532 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160567 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.190413 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.192283 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.192697 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.193181 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-v9tsv"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160603 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160634 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160665 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160917 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.160998 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161030 5049 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161068 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161086 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161114 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.161449 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.163989 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.164092 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.179933 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.180213 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.180329 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.180443 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.180532 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.193769 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4a320bf5-a2af-4d11-9656-802d906d46b9-audit-dir\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.195196 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.197044 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a5a342c4-f1fe-4574-b415-294717ae5c7f-trusted-ca\") pod \"console-operator-58897d9998-qxz5r\" (UID: \"a5a342c4-f1fe-4574-b415-294717ae5c7f\") " pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.200940 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/4a320bf5-a2af-4d11-9656-802d906d46b9-etcd-client\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.201934 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.202591 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q7vfz"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.203134 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.203633 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9b333e1-888e-4515-b954-c8cbfd4af83a-serving-cert\") pod \"route-controller-manager-6576b87f9c-z8wm5\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.204744 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b444j"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.205594 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-v9tsv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.206033 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.206191 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.206646 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.206994 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.207883 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.208010 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.208027 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.208119 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.210260 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.210450 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/421d5aab-975e-49c9-8ea4-eef2e635c7f7-metrics-tls\") pod \"ingress-operator-5b745b69d9-nvqqq\" (UID: \"421d5aab-975e-49c9-8ea4-eef2e635c7f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.222080 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4a320bf5-a2af-4d11-9656-802d906d46b9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.224492 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.225872 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.226908 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.229458 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.229546 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.230656 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qxz5r"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.230843 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.231570 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.232546 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.234148 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-j7hkh"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.234896 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-j7hkh" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.235168 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.236624 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.237170 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.237368 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.238294 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.241426 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.243251 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.243859 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.243950 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tn44m"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.247336 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.247369 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-msgbv"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.247381 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.249435 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gmn44"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.252278 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.252976 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.254305 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qnqlr"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.255851 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4gw2c"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.257396 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.257427 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-c9dvb"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.257559 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.258281 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.259499 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-v9tsv"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.261004 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.262357 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a80192f5-0bea-48df-a5b5-cae9402eb6fe-service-ca-bundle\") pod \"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.262404 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a80192f5-0bea-48df-a5b5-cae9402eb6fe-config\") pod \"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.262441 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a80192f5-0bea-48df-a5b5-cae9402eb6fe-serving-cert\") pod \"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.262475 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6h5k\" (UniqueName: \"kubernetes.io/projected/a80192f5-0bea-48df-a5b5-cae9402eb6fe-kube-api-access-t6h5k\") pod \"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.262500 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d4c630-42d2-490d-8782-1fdb7723181d-config\") pod \"machine-api-operator-5694c8668f-c9dvb\" (UID: \"e4d4c630-42d2-490d-8782-1fdb7723181d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.262536 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4d4c630-42d2-490d-8782-1fdb7723181d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-c9dvb\" (UID: \"e4d4c630-42d2-490d-8782-1fdb7723181d\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.262565 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e4d4c630-42d2-490d-8782-1fdb7723181d-images\") pod \"machine-api-operator-5694c8668f-c9dvb\" (UID: \"e4d4c630-42d2-490d-8782-1fdb7723181d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.262598 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vg8f\" (UniqueName: \"kubernetes.io/projected/e4d4c630-42d2-490d-8782-1fdb7723181d-kube-api-access-4vg8f\") pod \"machine-api-operator-5694c8668f-c9dvb\" (UID: \"e4d4c630-42d2-490d-8782-1fdb7723181d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.262614 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a80192f5-0bea-48df-a5b5-cae9402eb6fe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.263782 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4d4c630-42d2-490d-8782-1fdb7723181d-config\") pod \"machine-api-operator-5694c8668f-c9dvb\" (UID: \"e4d4c630-42d2-490d-8782-1fdb7723181d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.263950 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a80192f5-0bea-48df-a5b5-cae9402eb6fe-service-ca-bundle\") pod \"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.266752 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e4d4c630-42d2-490d-8782-1fdb7723181d-images\") pod \"machine-api-operator-5694c8668f-c9dvb\" (UID: \"e4d4c630-42d2-490d-8782-1fdb7723181d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.267431 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-j7hkh"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.267486 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cpnxt"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.268305 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.269206 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a80192f5-0bea-48df-a5b5-cae9402eb6fe-serving-cert\") pod \"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 
crc kubenswrapper[5049]: I0127 16:59:35.269264 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h8z99"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.269562 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a80192f5-0bea-48df-a5b5-cae9402eb6fe-config\") pod \"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.269769 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a80192f5-0bea-48df-a5b5-cae9402eb6fe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.270053 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.271833 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.272864 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e4d4c630-42d2-490d-8782-1fdb7723181d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-c9dvb\" (UID: \"e4d4c630-42d2-490d-8782-1fdb7723181d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.273027 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ttw4x"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.273461 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.275312 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pbgwv"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.276416 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.277604 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9vk5m"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.278801 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b444j"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.279649 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q7vfz"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.281591 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.282449 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.283807 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.286337 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-pjs7m"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.287510 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-pjs7m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.289085 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kjclh"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.289659 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kjclh" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.291655 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pjs7m"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.293244 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c5pfl"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.294428 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.300353 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.303732 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.306100 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.311925 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4gw2c"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.313347 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-76ltz"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.314518 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-76ltz" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.314591 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-76ltz"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.321220 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.342161 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.361935 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.381242 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.421798 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.442620 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.462952 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.482921 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.503498 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.522261 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.584549 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zs2b\" (UniqueName: \"kubernetes.io/projected/4a320bf5-a2af-4d11-9656-802d906d46b9-kube-api-access-5zs2b\") pod \"apiserver-7bbb656c7d-w44rt\" (UID: \"4a320bf5-a2af-4d11-9656-802d906d46b9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.601454 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/421d5aab-975e-49c9-8ea4-eef2e635c7f7-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nvqqq\" (UID: \"421d5aab-975e-49c9-8ea4-eef2e635c7f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.629413 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtwgg\" (UniqueName: \"kubernetes.io/projected/a5a342c4-f1fe-4574-b415-294717ae5c7f-kube-api-access-mtwgg\") pod \"console-operator-58897d9998-qxz5r\" (UID: \"a5a342c4-f1fe-4574-b415-294717ae5c7f\") " pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.632307 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.640915 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6sjc\" (UniqueName: \"kubernetes.io/projected/e9b333e1-888e-4515-b954-c8cbfd4af83a-kube-api-access-z6sjc\") pod \"route-controller-manager-6576b87f9c-z8wm5\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.664550 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb84g\" (UniqueName: \"kubernetes.io/projected/b7637684-717f-4bf3-bba2-cd3dec71715d-kube-api-access-mb84g\") pod \"oauth-openshift-558db77b4-tn44m\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.682700 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qm52\" (UniqueName: \"kubernetes.io/projected/421d5aab-975e-49c9-8ea4-eef2e635c7f7-kube-api-access-2qm52\") pod \"ingress-operator-5b745b69d9-nvqqq\" (UID: \"421d5aab-975e-49c9-8ea4-eef2e635c7f7\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.686529 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.691775 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.701919 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.716169 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.722599 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.723437 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.742594 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.762141 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.768090 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.784351 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.801557 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.822372 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.842041 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.857119 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt"] Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.862020 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.885440 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.901539 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.944429 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.956231 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.970415 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 16:59:35 crc kubenswrapper[5049]: I0127 16:59:35.985582 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.002495 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.022225 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.042273 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.076005 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddhq7\" (UniqueName: \"kubernetes.io/projected/d64fb1d4-8e82-4fbe-9767-20d358bccb0c-kube-api-access-ddhq7\") pod \"openshift-controller-manager-operator-756b6f6bc6-77q95\" (UID: \"d64fb1d4-8e82-4fbe-9767-20d358bccb0c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 
16:59:36.081020 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.101436 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.121969 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.141822 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.161268 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.182618 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.228835 5049 request.go:700] Waited for 1.019828623s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&limit=500&resourceVersion=0 Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.231204 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.231733 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.241961 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.261762 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq"] Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.263282 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.264945 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5"] Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.270849 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.288416 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 16:59:36 crc kubenswrapper[5049]: W0127 16:59:36.294463 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9b333e1_888e_4515_b954_c8cbfd4af83a.slice/crio-8dbd6ae9438580a32b8385d6948bbbdc1b418899c77df56e0e054241b2b4e262 WatchSource:0}: Error finding container 8dbd6ae9438580a32b8385d6948bbbdc1b418899c77df56e0e054241b2b4e262: Status 404 returned error can't find the container with id 8dbd6ae9438580a32b8385d6948bbbdc1b418899c77df56e0e054241b2b4e262 Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.301766 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.314795 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qxz5r"] Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.315780 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tn44m"] Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.322542 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 16:59:36 crc kubenswrapper[5049]: W0127 16:59:36.336196 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5a342c4_f1fe_4574_b415_294717ae5c7f.slice/crio-6e31432827040d87d204226466ad7659cbcf31d7f1c874c8affdacc9ef464e1e WatchSource:0}: Error finding container 6e31432827040d87d204226466ad7659cbcf31d7f1c874c8affdacc9ef464e1e: Status 404 returned error can't find the container with id 6e31432827040d87d204226466ad7659cbcf31d7f1c874c8affdacc9ef464e1e Jan 27 16:59:36 crc kubenswrapper[5049]: W0127 16:59:36.336926 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7637684_717f_4bf3_bba2_cd3dec71715d.slice/crio-f6604d4f49486aded8bef91938c42e5e23306c580e5e879893aaaa34fabf648c WatchSource:0}: Error finding container f6604d4f49486aded8bef91938c42e5e23306c580e5e879893aaaa34fabf648c: Status 404 returned error can't find the container with id f6604d4f49486aded8bef91938c42e5e23306c580e5e879893aaaa34fabf648c Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.341551 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.360873 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.391580 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 16:59:36 crc 
kubenswrapper[5049]: I0127 16:59:36.407990 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.422809 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.442709 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.463446 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.467655 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" event={"ID":"b7637684-717f-4bf3-bba2-cd3dec71715d","Type":"ContainerStarted","Data":"f6604d4f49486aded8bef91938c42e5e23306c580e5e879893aaaa34fabf648c"} Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.470103 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" event={"ID":"421d5aab-975e-49c9-8ea4-eef2e635c7f7","Type":"ContainerStarted","Data":"5a8dbfac63b0ff28d687684b2c7707a144e9baef16623ce4a4f1234a90463726"} Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.472120 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qxz5r" event={"ID":"a5a342c4-f1fe-4574-b415-294717ae5c7f","Type":"ContainerStarted","Data":"6e31432827040d87d204226466ad7659cbcf31d7f1c874c8affdacc9ef464e1e"} Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.473936 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" event={"ID":"e9b333e1-888e-4515-b954-c8cbfd4af83a","Type":"ContainerStarted","Data":"8dbd6ae9438580a32b8385d6948bbbdc1b418899c77df56e0e054241b2b4e262"} Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.474430 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.475961 5049 generic.go:334] "Generic (PLEG): container finished" podID="4a320bf5-a2af-4d11-9656-802d906d46b9" containerID="18fe076c771bcfc3a9295dd652a5d8eb68b99cc0b99f5cbd9ca8675bb653771d" exitCode=0 Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.476004 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" event={"ID":"4a320bf5-a2af-4d11-9656-802d906d46b9","Type":"ContainerDied","Data":"18fe076c771bcfc3a9295dd652a5d8eb68b99cc0b99f5cbd9ca8675bb653771d"} Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.476031 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" event={"ID":"4a320bf5-a2af-4d11-9656-802d906d46b9","Type":"ContainerStarted","Data":"84dce6605188d6e669386a320d8fdfa85c874654bc99b85349a6d97066d7796a"} Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.476709 5049 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-z8wm5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 27 16:59:36 crc 
kubenswrapper[5049]: I0127 16:59:36.476790 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" podUID="e9b333e1-888e-4515-b954-c8cbfd4af83a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.482526 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.497879 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95"] Jan 27 16:59:36 crc kubenswrapper[5049]: W0127 16:59:36.510022 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd64fb1d4_8e82_4fbe_9767_20d358bccb0c.slice/crio-f91446cfeda473ca17802b8fd2130683b0fadfcb1fd619f05c9ee88cb94c38e3 WatchSource:0}: Error finding container f91446cfeda473ca17802b8fd2130683b0fadfcb1fd619f05c9ee88cb94c38e3: Status 404 returned error can't find the container with id f91446cfeda473ca17802b8fd2130683b0fadfcb1fd619f05c9ee88cb94c38e3 Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.510631 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.521024 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.541957 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.561345 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.582759 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.602074 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.629889 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.641453 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.663103 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.683352 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.702136 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.722729 5049 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.741873 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.761175 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.781858 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.802596 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.822185 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.843125 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.862364 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.882410 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.901451 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.922351 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.942491 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.961989 5049 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 16:59:36 crc kubenswrapper[5049]: I0127 16:59:36.982126 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.002472 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.049117 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6h5k\" (UniqueName: \"kubernetes.io/projected/a80192f5-0bea-48df-a5b5-cae9402eb6fe-kube-api-access-t6h5k\") pod \"authentication-operator-69f744f599-pbgwv\" (UID: \"a80192f5-0bea-48df-a5b5-cae9402eb6fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.062558 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.067863 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vg8f\" (UniqueName: 
\"kubernetes.io/projected/e4d4c630-42d2-490d-8782-1fdb7723181d-kube-api-access-4vg8f\") pod \"machine-api-operator-5694c8668f-c9dvb\" (UID: \"e4d4c630-42d2-490d-8782-1fdb7723181d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.080378 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.084393 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.101024 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.111652 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.123703 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.143526 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.161924 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.182622 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.214422 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.222207 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.240114 5049 request.go:700] Waited for 1.925207542s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.242587 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.286415 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d232bc4e-f92d-4b11-bab8-f271f05ebba9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vl66d\" (UID: \"d232bc4e-f92d-4b11-bab8-f271f05ebba9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.287129 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.287177 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.287232 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d232bc4e-f92d-4b11-bab8-f271f05ebba9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vl66d\" (UID: \"d232bc4e-f92d-4b11-bab8-f271f05ebba9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.287305 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d232bc4e-f92d-4b11-bab8-f271f05ebba9-config\") pod \"kube-controller-manager-operator-78b949d7b-vl66d\" (UID: \"d232bc4e-f92d-4b11-bab8-f271f05ebba9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.287336 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-registry-tls\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.287372 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef5b7d0-2d2e-4229-8149-edeb57475be6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6h54c\" (UID: \"3ef5b7d0-2d2e-4229-8149-edeb57475be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.287474 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-registry-certificates\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.287506 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ef5b7d0-2d2e-4229-8149-edeb57475be6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6h54c\" (UID: \"3ef5b7d0-2d2e-4229-8149-edeb57475be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c" Jan 27 16:59:37 crc kubenswrapper[5049]: E0127 16:59:37.287601 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 16:59:37.787580897 +0000 UTC m=+152.886554466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.287653 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-trusted-ca\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.287793 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.287960 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2fcf\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-kube-api-access-s2fcf\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.288027 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-bound-sa-token\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.288063 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef5b7d0-2d2e-4229-8149-edeb57475be6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6h54c\" (UID: \"3ef5b7d0-2d2e-4229-8149-edeb57475be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.331804 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-c9dvb"] Jan 27 16:59:37 crc kubenswrapper[5049]: W0127 16:59:37.338767 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4d4c630_42d2_490d_8782_1fdb7723181d.slice/crio-e9d25bcf19ee2eaf25b70eaf81cd1a175809748ddeddcc0dd4d4bd827069f79a WatchSource:0}: Error finding container e9d25bcf19ee2eaf25b70eaf81cd1a175809748ddeddcc0dd4d4bd827069f79a: Status 404 returned error can't find the container with id e9d25bcf19ee2eaf25b70eaf81cd1a175809748ddeddcc0dd4d4bd827069f79a Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.372219 5049 
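The MountDevice error above fails because the kubevirt.io.hostpath-provisioner CSI plugin has not yet registered with this kubelet. Registered drivers are mirrored into the node's CSINode object, so one way to watch for the registration is to poll that object; a hedged sketch follows (the node name "crc" is taken from the log hostname, the kubeconfig path is assumed).

```go
// Sketch: list the CSI drivers currently registered on a node. Until
// kubevirt.io.hostpath-provisioner appears here, every MountVolume.MountDevice
// attempt in the log above is expected to fail and be retried.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	csiNode, err := clientset.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered CSI driver:", d.Name)
	}
}
```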
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pbgwv"] Jan 27 16:59:37 crc kubenswrapper[5049]: W0127 16:59:37.381341 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda80192f5_0bea_48df_a5b5_cae9402eb6fe.slice/crio-595c36aaab9c197319f73bf4c69491460657f34b73e133621d071974d5ae55b4 WatchSource:0}: Error finding container 595c36aaab9c197319f73bf4c69491460657f34b73e133621d071974d5ae55b4: Status 404 returned error can't find the container with id 595c36aaab9c197319f73bf4c69491460657f34b73e133621d071974d5ae55b4 Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.390822 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:37 crc kubenswrapper[5049]: E0127 16:59:37.391025 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:37.890989496 +0000 UTC m=+152.989963055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391117 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46-proxy-tls\") pod \"machine-config-controller-84d6567774-2xkjk\" (UID: \"c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391176 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mxvc\" (UniqueName: \"kubernetes.io/projected/86d28008-71fe-4476-80ef-02d4086307b6-kube-api-access-4mxvc\") pod \"machine-config-operator-74547568cd-sx4ts\" (UID: \"86d28008-71fe-4476-80ef-02d4086307b6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391253 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b9d72e-e939-4cd7-9c2d-17fae6c25c4c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cm47v\" (UID: \"02b9d72e-e939-4cd7-9c2d-17fae6c25c4c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391280 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/098c2b85-fe69-4df5-9ec3-43a6f25316c0-config\") pod 
\"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391336 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-registry-certificates\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391361 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ef5b7d0-2d2e-4229-8149-edeb57475be6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6h54c\" (UID: \"3ef5b7d0-2d2e-4229-8149-edeb57475be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391385 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/86d28008-71fe-4476-80ef-02d4086307b6-images\") pod \"machine-config-operator-74547568cd-sx4ts\" (UID: \"86d28008-71fe-4476-80ef-02d4086307b6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391445 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c83d0f19-d930-4144-9fbc-c581fe082422-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6jbnp\" (UID: \"c83d0f19-d930-4144-9fbc-c581fe082422\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391481 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-config\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391503 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/098c2b85-fe69-4df5-9ec3-43a6f25316c0-etcd-ca\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391525 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-oauth-config\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391588 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4e2cc58-fcbb-4e4c-84ec-762d66c13313-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-59dkc\" (UID: \"d4e2cc58-fcbb-4e4c-84ec-762d66c13313\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391609 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c83d0f19-d930-4144-9fbc-c581fe082422-serving-cert\") pod \"openshift-config-operator-7777fb866f-6jbnp\" (UID: \"c83d0f19-d930-4144-9fbc-c581fe082422\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391645 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngchp\" (UniqueName: \"kubernetes.io/projected/c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46-kube-api-access-ngchp\") pod \"machine-config-controller-84d6567774-2xkjk\" (UID: \"c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391739 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2xkjk\" (UID: \"c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.391769 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf5c9\" (UniqueName: \"kubernetes.io/projected/1d175bca-3f73-4ad1-be29-f724a6baee2c-kube-api-access-hf5c9\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.394085 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-trusted-ca\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.394123 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzmhx\" (UniqueName: \"kubernetes.io/projected/d4e2cc58-fcbb-4e4c-84ec-762d66c13313-kube-api-access-mzmhx\") pod \"cluster-image-registry-operator-dc59b4c8b-59dkc\" (UID: \"d4e2cc58-fcbb-4e4c-84ec-762d66c13313\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.394151 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6tkq\" (UniqueName: \"kubernetes.io/projected/ba18b997-5143-40c5-9309-120e553e337a-kube-api-access-c6tkq\") pod \"dns-operator-744455d44c-gmn44\" (UID: \"ba18b997-5143-40c5-9309-120e553e337a\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmn44" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.394174 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs46n\" (UniqueName: \"kubernetes.io/projected/b96780a0-72a6-4ee0-ae94-60221d4f0a58-kube-api-access-zs46n\") pod \"downloads-7954f5f757-msgbv\" (UID: 
\"b96780a0-72a6-4ee0-ae94-60221d4f0a58\") " pod="openshift-console/downloads-7954f5f757-msgbv" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.394202 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d1d195b-c8b9-4e9e-a47b-b0187cdd6195-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-frn4s\" (UID: \"3d1d195b-c8b9-4e9e-a47b-b0187cdd6195\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.394227 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/098c2b85-fe69-4df5-9ec3-43a6f25316c0-etcd-service-ca\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.394272 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/098c2b85-fe69-4df5-9ec3-43a6f25316c0-serving-cert\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.394305 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.395663 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8hq\" (UniqueName: \"kubernetes.io/projected/02b9d72e-e939-4cd7-9c2d-17fae6c25c4c-kube-api-access-2d8hq\") pod \"openshift-apiserver-operator-796bbdcf4f-cm47v\" (UID: \"02b9d72e-e939-4cd7-9c2d-17fae6c25c4c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.396861 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-bound-sa-token\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.396915 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.396999 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2fcf\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-kube-api-access-s2fcf\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.397240 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0092259f-7233-4db1-9ed5-667deb592e96-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-h8z99\" (UID: \"0092259f-7233-4db1-9ed5-667deb592e96\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h8z99"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.397341 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-serving-cert\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.397373 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d4e2cc58-fcbb-4e4c-84ec-762d66c13313-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-59dkc\" (UID: \"d4e2cc58-fcbb-4e4c-84ec-762d66c13313\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.397419 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef5b7d0-2d2e-4229-8149-edeb57475be6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6h54c\" (UID: \"3ef5b7d0-2d2e-4229-8149-edeb57475be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.397447 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-oauth-serving-cert\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.397492 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d175bca-3f73-4ad1-be29-f724a6baee2c-service-ca-bundle\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.397524 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d232bc4e-f92d-4b11-bab8-f271f05ebba9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vl66d\" (UID: \"d232bc4e-f92d-4b11-bab8-f271f05ebba9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.397552 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5zbr\" (UniqueName: \"kubernetes.io/projected/3d1d195b-c8b9-4e9e-a47b-b0187cdd6195-kube-api-access-k5zbr\") pod \"cluster-samples-operator-665b6dd947-frn4s\" (UID: \"3d1d195b-c8b9-4e9e-a47b-b0187cdd6195\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.397582 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-546pq\" (UniqueName: \"kubernetes.io/projected/c83d0f19-d930-4144-9fbc-c581fe082422-kube-api-access-546pq\") pod \"openshift-config-operator-7777fb866f-6jbnp\" (UID: \"c83d0f19-d930-4144-9fbc-c581fe082422\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.397923 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1d175bca-3f73-4ad1-be29-f724a6baee2c-default-certificate\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.397943 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-trusted-ca\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.397985 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-service-ca\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398037 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398069 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398288 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d232bc4e-f92d-4b11-bab8-f271f05ebba9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vl66d\" (UID: \"d232bc4e-f92d-4b11-bab8-f271f05ebba9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d"
Jan 27 16:59:37 crc kubenswrapper[5049]: E0127 16:59:37.398646 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:37.898628786 +0000 UTC m=+152.997602335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398688 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/86d28008-71fe-4476-80ef-02d4086307b6-proxy-tls\") pod \"machine-config-operator-74547568cd-sx4ts\" (UID: \"86d28008-71fe-4476-80ef-02d4086307b6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398713 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b9d72e-e939-4cd7-9c2d-17fae6c25c4c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cm47v\" (UID: \"02b9d72e-e939-4cd7-9c2d-17fae6c25c4c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398733 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86zl8\" (UniqueName: \"kubernetes.io/projected/ed96c1d9-55f9-48df-970b-2b1e71a90633-kube-api-access-86zl8\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398761 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d232bc4e-f92d-4b11-bab8-f271f05ebba9-config\") pod \"kube-controller-manager-operator-78b949d7b-vl66d\" (UID: \"d232bc4e-f92d-4b11-bab8-f271f05ebba9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398779 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4e2cc58-fcbb-4e4c-84ec-762d66c13313-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-59dkc\" (UID: \"d4e2cc58-fcbb-4e4c-84ec-762d66c13313\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398801 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d175bca-3f73-4ad1-be29-f724a6baee2c-metrics-certs\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398816 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba18b997-5143-40c5-9309-120e553e337a-metrics-tls\") pod \"dns-operator-744455d44c-gmn44\" (UID: \"ba18b997-5143-40c5-9309-120e553e337a\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmn44"
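The "No retries permitted until ... (durationBeforeRetry 500ms)" errors above are per-operation exponential backoff: each failed mount is gated for a delay that starts at 500ms and grows on repeated failure. The same shape can be expressed with apimachinery's wait package, as in the sketch below; the 500ms start matches the log, while the factor, step count, and cap here are assumptions.

```go
// Sketch: exponential backoff around a retryable mount-style operation,
// mirroring the nestedpendingoperations gating seen above.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first retry delay, as logged
		Factor:   2.0,                    // assumed growth factor
		Steps:    6,                      // assumed attempt budget
		Cap:      2 * time.Minute,        // assumed upper bound
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Println("MountDevice attempt", attempt)
		// Stand-in condition: has the CSI driver registered yet?
		driverRegistered := attempt >= 4
		return driverRegistered, nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}
```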
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398835 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-registry-tls\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398850 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/098c2b85-fe69-4df5-9ec3-43a6f25316c0-etcd-client\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398865 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgtml\" (UniqueName: \"kubernetes.io/projected/098c2b85-fe69-4df5-9ec3-43a6f25316c0-kube-api-access-pgtml\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398881 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-trusted-ca-bundle\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398901 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86d28008-71fe-4476-80ef-02d4086307b6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sx4ts\" (UID: \"86d28008-71fe-4476-80ef-02d4086307b6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398919 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef5b7d0-2d2e-4229-8149-edeb57475be6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6h54c\" (UID: \"3ef5b7d0-2d2e-4229-8149-edeb57475be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398938 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d175bca-3f73-4ad1-be29-f724a6baee2c-stats-auth\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.398963 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9pzp\" (UniqueName: \"kubernetes.io/projected/0092259f-7233-4db1-9ed5-667deb592e96-kube-api-access-n9pzp\") pod \"control-plane-machine-set-operator-78cbb6b69f-h8z99\" (UID: \"0092259f-7233-4db1-9ed5-667deb592e96\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h8z99"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.399729 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d232bc4e-f92d-4b11-bab8-f271f05ebba9-config\") pod \"kube-controller-manager-operator-78b949d7b-vl66d\" (UID: \"d232bc4e-f92d-4b11-bab8-f271f05ebba9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.400290 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef5b7d0-2d2e-4229-8149-edeb57475be6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6h54c\" (UID: \"3ef5b7d0-2d2e-4229-8149-edeb57475be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.400935 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-registry-certificates\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.405824 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.405838 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef5b7d0-2d2e-4229-8149-edeb57475be6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6h54c\" (UID: \"3ef5b7d0-2d2e-4229-8149-edeb57475be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.407097 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d232bc4e-f92d-4b11-bab8-f271f05ebba9-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vl66d\" (UID: \"d232bc4e-f92d-4b11-bab8-f271f05ebba9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.409084 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-registry-tls\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.415598 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ef5b7d0-2d2e-4229-8149-edeb57475be6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-6h54c\" (UID: \"3ef5b7d0-2d2e-4229-8149-edeb57475be6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.464806 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-bound-sa-token\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.482865 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2fcf\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-kube-api-access-s2fcf\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.485988 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qxz5r" event={"ID":"a5a342c4-f1fe-4574-b415-294717ae5c7f","Type":"ContainerStarted","Data":"3acb35baa23b1ab5a701d977979cc74b5406836683a280bcf49b630d16d46f75"}
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.486229 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-qxz5r"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.487889 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" event={"ID":"a80192f5-0bea-48df-a5b5-cae9402eb6fe","Type":"ContainerStarted","Data":"595c36aaab9c197319f73bf4c69491460657f34b73e133621d071974d5ae55b4"}
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.488368 5049 patch_prober.go:28] interesting pod/console-operator-58897d9998-qxz5r container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.488426 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-qxz5r" podUID="a5a342c4-f1fe-4574-b415-294717ae5c7f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.492804 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" event={"ID":"b7637684-717f-4bf3-bba2-cd3dec71715d","Type":"ContainerStarted","Data":"30b2b66094fd34b7b088861d6774ca06f9f4836b136330e6ff6c3da503a0b28c"}
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.493093 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m"
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.497399 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" event={"ID":"421d5aab-975e-49c9-8ea4-eef2e635c7f7","Type":"ContainerStarted","Data":"e875a55e69f80337372e2138173f9ee995a697db20cafba87cafbe82b7698822"}
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.497454 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" event={"ID":"421d5aab-975e-49c9-8ea4-eef2e635c7f7","Type":"ContainerStarted","Data":"fd67e4327e4afe14678452f6f78a231b0bf2e67268bbc13accf16cb8cd1120c3"}
Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.498936 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" event={"ID":"d64fb1d4-8e82-4fbe-9767-20d358bccb0c","Type":"ContainerStarted","Data":"af9be91f96a647d4b5f377cce146cb625d5c38ddebbfb9932523ed9cbdd6decf"}
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" event={"ID":"d64fb1d4-8e82-4fbe-9767-20d358bccb0c","Type":"ContainerStarted","Data":"af9be91f96a647d4b5f377cce146cb625d5c38ddebbfb9932523ed9cbdd6decf"} Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.498967 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" event={"ID":"d64fb1d4-8e82-4fbe-9767-20d358bccb0c","Type":"ContainerStarted","Data":"f91446cfeda473ca17802b8fd2130683b0fadfcb1fd619f05c9ee88cb94c38e3"} Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.499450 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:37 crc kubenswrapper[5049]: E0127 16:59:37.499730 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:37.999706992 +0000 UTC m=+153.098680541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.499783 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd191306-613f-4c2f-9b3e-e38146dd4400-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6ckbp\" (UID: \"cd191306-613f-4c2f-9b3e-e38146dd4400\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.499811 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-image-import-ca\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.499862 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.499924 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-service-ca\") pod 
\"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.499975 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1209cc6-7d3a-4431-80ed-878ad81fbd3d-cert\") pod \"ingress-canary-76ltz\" (UID: \"a1209cc6-7d3a-4431-80ed-878ad81fbd3d\") " pod="openshift-ingress-canary/ingress-canary-76ltz" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500122 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8nlm\" (UniqueName: \"kubernetes.io/projected/80c97914-9a11-474e-a4f1-14dde70837cd-kube-api-access-f8nlm\") pod \"olm-operator-6b444d44fb-hj689\" (UID: \"80c97914-9a11-474e-a4f1-14dde70837cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500185 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b9d72e-e939-4cd7-9c2d-17fae6c25c4c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cm47v\" (UID: \"02b9d72e-e939-4cd7-9c2d-17fae6c25c4c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v" Jan 27 16:59:37 crc kubenswrapper[5049]: E0127 16:59:37.500211 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:38.000203306 +0000 UTC m=+153.099177085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500235 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d894ce44-2759-40c6-9d2c-f26fa1691f0d-config\") pod \"service-ca-operator-777779d784-jdb9j\" (UID: \"d894ce44-2759-40c6-9d2c-f26fa1691f0d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500257 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34cc919b-d826-444d-9748-e3e6704d03cb-auth-proxy-config\") pod \"machine-approver-56656f9798-9sm29\" (UID: \"34cc919b-d826-444d-9748-e3e6704d03cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500274 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d508920-c710-4060-b99f-12594f7c1fb4-apiservice-cert\") pod \"packageserver-d55dfcdfc-s2n89\" (UID: \"5d508920-c710-4060-b99f-12594f7c1fb4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" Jan 27 16:59:37 crc 
kubenswrapper[5049]: I0127 16:59:37.500326 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d175bca-3f73-4ad1-be29-f724a6baee2c-metrics-certs\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500348 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/098c2b85-fe69-4df5-9ec3-43a6f25316c0-etcd-client\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500373 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86d28008-71fe-4476-80ef-02d4086307b6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sx4ts\" (UID: \"86d28008-71fe-4476-80ef-02d4086307b6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500396 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3a79954c-d75b-4e08-b5f3-ffb5783d8ac7-signing-key\") pod \"service-ca-9c57cc56f-j7hkh\" (UID: \"3a79954c-d75b-4e08-b5f3-ffb5783d8ac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-j7hkh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500417 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d175bca-3f73-4ad1-be29-f724a6baee2c-stats-auth\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500443 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46-proxy-tls\") pod \"machine-config-controller-84d6567774-2xkjk\" (UID: \"c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500463 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84px8\" (UniqueName: \"kubernetes.io/projected/7e762e88-c00a-49b7-8a84-48c7fe50b602-kube-api-access-84px8\") pod \"marketplace-operator-79b997595-q7vfz\" (UID: \"7e762e88-c00a-49b7-8a84-48c7fe50b602\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500486 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b9d72e-e939-4cd7-9c2d-17fae6c25c4c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cm47v\" (UID: \"02b9d72e-e939-4cd7-9c2d-17fae6c25c4c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500511 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/098c2b85-fe69-4df5-9ec3-43a6f25316c0-config\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500534 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500554 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/86d28008-71fe-4476-80ef-02d4086307b6-images\") pod \"machine-config-operator-74547568cd-sx4ts\" (UID: \"86d28008-71fe-4476-80ef-02d4086307b6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500573 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-csi-data-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500593 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg95d\" (UniqueName: \"kubernetes.io/projected/7de8021c-2a6e-43d0-bd02-b297e5583c52-kube-api-access-sg95d\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500623 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtp6f\" (UniqueName: \"kubernetes.io/projected/770cf38a-9f1f-441a-bf23-4944bd750e24-kube-api-access-xtp6f\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500646 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c83d0f19-d930-4144-9fbc-c581fe082422-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6jbnp\" (UID: \"c83d0f19-d930-4144-9fbc-c581fe082422\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500736 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae3ce8f3-8ec9-4430-bd93-8ae068f1af28-certs\") pod \"machine-config-server-kjclh\" (UID: \"ae3ce8f3-8ec9-4430-bd93-8ae068f1af28\") " pod="openshift-machine-config-operator/machine-config-server-kjclh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500759 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-config\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " 
pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500777 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-oauth-config\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500797 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4e2cc58-fcbb-4e4c-84ec-762d66c13313-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-59dkc\" (UID: \"d4e2cc58-fcbb-4e4c-84ec-762d66c13313\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500816 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e762e88-c00a-49b7-8a84-48c7fe50b602-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q7vfz\" (UID: \"7e762e88-c00a-49b7-8a84-48c7fe50b602\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500866 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlsxx\" (UniqueName: \"kubernetes.io/projected/ed398f74-73d3-4e3b-a7b8-57d283e9adfa-kube-api-access-vlsxx\") pod \"multus-admission-controller-857f4d67dd-v9tsv\" (UID: \"ed398f74-73d3-4e3b-a7b8-57d283e9adfa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v9tsv" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.500887 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2xkjk\" (UID: \"c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.501814 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzmhx\" (UniqueName: \"kubernetes.io/projected/d4e2cc58-fcbb-4e4c-84ec-762d66c13313-kube-api-access-mzmhx\") pod \"cluster-image-registry-operator-dc59b4c8b-59dkc\" (UID: \"d4e2cc58-fcbb-4e4c-84ec-762d66c13313\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.501914 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs46n\" (UniqueName: \"kubernetes.io/projected/b96780a0-72a6-4ee0-ae94-60221d4f0a58-kube-api-access-zs46n\") pod \"downloads-7954f5f757-msgbv\" (UID: \"b96780a0-72a6-4ee0-ae94-60221d4f0a58\") " pod="openshift-console/downloads-7954f5f757-msgbv" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.501995 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st79p\" (UniqueName: \"kubernetes.io/projected/cd191306-613f-4c2f-9b3e-e38146dd4400-kube-api-access-st79p\") pod \"package-server-manager-789f6589d5-6ckbp\" (UID: \"cd191306-613f-4c2f-9b3e-e38146dd4400\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502049 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-trusted-ca-bundle\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502092 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed398f74-73d3-4e3b-a7b8-57d283e9adfa-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-v9tsv\" (UID: \"ed398f74-73d3-4e3b-a7b8-57d283e9adfa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v9tsv" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502251 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d1d195b-c8b9-4e9e-a47b-b0187cdd6195-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-frn4s\" (UID: \"3d1d195b-c8b9-4e9e-a47b-b0187cdd6195\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502287 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/098c2b85-fe69-4df5-9ec3-43a6f25316c0-serving-cert\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502328 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrh4s\" (UniqueName: \"kubernetes.io/projected/b22f850b-fb2b-4839-8b41-bb2a92059a5c-kube-api-access-wrh4s\") pod \"dns-default-pjs7m\" (UID: \"b22f850b-fb2b-4839-8b41-bb2a92059a5c\") " pod="openshift-dns/dns-default-pjs7m" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502395 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d8hq\" (UniqueName: \"kubernetes.io/projected/02b9d72e-e939-4cd7-9c2d-17fae6c25c4c-kube-api-access-2d8hq\") pod \"openshift-apiserver-operator-796bbdcf4f-cm47v\" (UID: \"02b9d72e-e939-4cd7-9c2d-17fae6c25c4c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502431 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/34cc919b-d826-444d-9748-e3e6704d03cb-machine-approver-tls\") pod \"machine-approver-56656f9798-9sm29\" (UID: \"34cc919b-d826-444d-9748-e3e6704d03cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502464 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b22f850b-fb2b-4839-8b41-bb2a92059a5c-config-volume\") pod \"dns-default-pjs7m\" (UID: \"b22f850b-fb2b-4839-8b41-bb2a92059a5c\") " pod="openshift-dns/dns-default-pjs7m" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502523 
5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae3ce8f3-8ec9-4430-bd93-8ae068f1af28-node-bootstrap-token\") pod \"machine-config-server-kjclh\" (UID: \"ae3ce8f3-8ec9-4430-bd93-8ae068f1af28\") " pod="openshift-machine-config-operator/machine-config-server-kjclh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502586 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0092259f-7233-4db1-9ed5-667deb592e96-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-h8z99\" (UID: \"0092259f-7233-4db1-9ed5-667deb592e96\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h8z99" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502633 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d4e2cc58-fcbb-4e4c-84ec-762d66c13313-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-59dkc\" (UID: \"d4e2cc58-fcbb-4e4c-84ec-762d66c13313\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502688 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c8277b4-0db2-4e62-aee2-4009a3afda61-srv-cert\") pod \"catalog-operator-68c6474976-sx7x4\" (UID: \"5c8277b4-0db2-4e62-aee2-4009a3afda61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502721 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn4bk\" (UniqueName: \"kubernetes.io/projected/34cc919b-d826-444d-9748-e3e6704d03cb-kube-api-access-mn4bk\") pod \"machine-approver-56656f9798-9sm29\" (UID: \"34cc919b-d826-444d-9748-e3e6704d03cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502947 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/80c97914-9a11-474e-a4f1-14dde70837cd-srv-cert\") pod \"olm-operator-6b444d44fb-hj689\" (UID: \"80c97914-9a11-474e-a4f1-14dde70837cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.502976 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed69d5ac-f3d2-42f7-a923-265ad3aad708-config\") pod \"kube-apiserver-operator-766d6c64bb-8p7nq\" (UID: \"ed69d5ac-f3d2-42f7-a923-265ad3aad708\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503028 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d175bca-3f73-4ad1-be29-f724a6baee2c-service-ca-bundle\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503062 
5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-oauth-serving-cert\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503097 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34cc919b-d826-444d-9748-e3e6704d03cb-config\") pod \"machine-approver-56656f9798-9sm29\" (UID: \"34cc919b-d826-444d-9748-e3e6704d03cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503126 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e762e88-c00a-49b7-8a84-48c7fe50b602-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q7vfz\" (UID: \"7e762e88-c00a-49b7-8a84-48c7fe50b602\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503199 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5zbr\" (UniqueName: \"kubernetes.io/projected/3d1d195b-c8b9-4e9e-a47b-b0187cdd6195-kube-api-access-k5zbr\") pod \"cluster-samples-operator-665b6dd947-frn4s\" (UID: \"3d1d195b-c8b9-4e9e-a47b-b0187cdd6195\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503235 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1d175bca-3f73-4ad1-be29-f724a6baee2c-default-certificate\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503306 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsq7p\" (UniqueName: \"kubernetes.io/projected/5d508920-c710-4060-b99f-12594f7c1fb4-kube-api-access-tsq7p\") pod \"packageserver-d55dfcdfc-s2n89\" (UID: \"5d508920-c710-4060-b99f-12594f7c1fb4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503341 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aeb400e-8352-4de5-baf4-e64073f57d32-secret-volume\") pod \"collect-profiles-29492205-mdgtc\" (UID: \"7aeb400e-8352-4de5-baf4-e64073f57d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503372 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6gp4\" (UniqueName: \"kubernetes.io/projected/7aeb400e-8352-4de5-baf4-e64073f57d32-kube-api-access-x6gp4\") pod \"collect-profiles-29492205-mdgtc\" (UID: \"7aeb400e-8352-4de5-baf4-e64073f57d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503408 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dwnw\" (UniqueName: \"kubernetes.io/projected/33c5f582-79d8-4ba1-8806-1104540ed6eb-kube-api-access-9dwnw\") pod \"kube-storage-version-migrator-operator-b67b599dd-kc8qf\" (UID: \"33c5f582-79d8-4ba1-8806-1104540ed6eb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503439 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4plxs\" (UniqueName: \"kubernetes.io/projected/ef242174-8534-44ce-bc43-cb6648c594c4-kube-api-access-4plxs\") pod \"migrator-59844c95c7-9vk5m\" (UID: \"ef242174-8534-44ce-bc43-cb6648c594c4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vk5m" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503477 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/86d28008-71fe-4476-80ef-02d4086307b6-proxy-tls\") pod \"machine-config-operator-74547568cd-sx4ts\" (UID: \"86d28008-71fe-4476-80ef-02d4086307b6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503502 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/770cf38a-9f1f-441a-bf23-4944bd750e24-audit-dir\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503582 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86zl8\" (UniqueName: \"kubernetes.io/projected/ed96c1d9-55f9-48df-970b-2b1e71a90633-kube-api-access-86zl8\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503701 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-plugins-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503768 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4e2cc58-fcbb-4e4c-84ec-762d66c13313-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-59dkc\" (UID: \"d4e2cc58-fcbb-4e4c-84ec-762d66c13313\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503800 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba18b997-5143-40c5-9309-120e553e337a-metrics-tls\") pod \"dns-operator-744455d44c-gmn44\" (UID: \"ba18b997-5143-40c5-9309-120e553e337a\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmn44" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503836 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgtml\" (UniqueName: 
\"kubernetes.io/projected/098c2b85-fe69-4df5-9ec3-43a6f25316c0-kube-api-access-pgtml\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503859 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-trusted-ca-bundle\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503889 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed69d5ac-f3d2-42f7-a923-265ad3aad708-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8p7nq\" (UID: \"ed69d5ac-f3d2-42f7-a923-265ad3aad708\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503921 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9pzp\" (UniqueName: \"kubernetes.io/projected/0092259f-7233-4db1-9ed5-667deb592e96-kube-api-access-n9pzp\") pod \"control-plane-machine-set-operator-78cbb6b69f-h8z99\" (UID: \"0092259f-7233-4db1-9ed5-667deb592e96\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h8z99" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503953 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9987ba69-abc5-4d37-84aa-a708e31c1586-serving-cert\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.503974 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-etcd-serving-ca\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504006 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" event={"ID":"e9b333e1-888e-4515-b954-c8cbfd4af83a","Type":"ContainerStarted","Data":"1b27b0755746526a903bc9f4e883e62d17ecd19fef04878e0c576252a556c590"} Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504012 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mxvc\" (UniqueName: \"kubernetes.io/projected/86d28008-71fe-4476-80ef-02d4086307b6-kube-api-access-4mxvc\") pod \"machine-config-operator-74547568cd-sx4ts\" (UID: \"86d28008-71fe-4476-80ef-02d4086307b6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504096 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-mountpoint-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: 
\"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504132 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c8277b4-0db2-4e62-aee2-4009a3afda61-profile-collector-cert\") pod \"catalog-operator-68c6474976-sx7x4\" (UID: \"5c8277b4-0db2-4e62-aee2-4009a3afda61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504188 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89v8d\" (UniqueName: \"kubernetes.io/projected/3a79954c-d75b-4e08-b5f3-ffb5783d8ac7-kube-api-access-89v8d\") pod \"service-ca-9c57cc56f-j7hkh\" (UID: \"3a79954c-d75b-4e08-b5f3-ffb5783d8ac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-j7hkh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504229 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3a79954c-d75b-4e08-b5f3-ffb5783d8ac7-signing-cabundle\") pod \"service-ca-9c57cc56f-j7hkh\" (UID: \"3a79954c-d75b-4e08-b5f3-ffb5783d8ac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-j7hkh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504263 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/770cf38a-9f1f-441a-bf23-4944bd750e24-encryption-config\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504759 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/098c2b85-fe69-4df5-9ec3-43a6f25316c0-etcd-ca\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504790 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-socket-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504823 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33c5f582-79d8-4ba1-8806-1104540ed6eb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kc8qf\" (UID: \"33c5f582-79d8-4ba1-8806-1104540ed6eb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504860 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngchp\" (UniqueName: \"kubernetes.io/projected/c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46-kube-api-access-ngchp\") pod \"machine-config-controller-84d6567774-2xkjk\" (UID: \"c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504897 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c83d0f19-d930-4144-9fbc-c581fe082422-serving-cert\") pod \"openshift-config-operator-7777fb866f-6jbnp\" (UID: \"c83d0f19-d930-4144-9fbc-c581fe082422\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504930 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed69d5ac-f3d2-42f7-a923-265ad3aad708-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8p7nq\" (UID: \"ed69d5ac-f3d2-42f7-a923-265ad3aad708\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504969 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf5c9\" (UniqueName: \"kubernetes.io/projected/1d175bca-3f73-4ad1-be29-f724a6baee2c-kube-api-access-hf5c9\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.505003 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6tkq\" (UniqueName: \"kubernetes.io/projected/ba18b997-5143-40c5-9309-120e553e337a-kube-api-access-c6tkq\") pod \"dns-operator-744455d44c-gmn44\" (UID: \"ba18b997-5143-40c5-9309-120e553e337a\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmn44" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.505037 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c5f582-79d8-4ba1-8806-1104540ed6eb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kc8qf\" (UID: \"33c5f582-79d8-4ba1-8806-1104540ed6eb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.505072 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsd6t\" (UniqueName: \"kubernetes.io/projected/9987ba69-abc5-4d37-84aa-a708e31c1586-kube-api-access-bsd6t\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.505104 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5w84\" (UniqueName: \"kubernetes.io/projected/a1209cc6-7d3a-4431-80ed-878ad81fbd3d-kube-api-access-r5w84\") pod \"ingress-canary-76ltz\" (UID: \"a1209cc6-7d3a-4431-80ed-878ad81fbd3d\") " pod="openshift-ingress-canary/ingress-canary-76ltz" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.505138 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8dh5\" (UniqueName: \"kubernetes.io/projected/ae3ce8f3-8ec9-4430-bd93-8ae068f1af28-kube-api-access-x8dh5\") pod \"machine-config-server-kjclh\" (UID: \"ae3ce8f3-8ec9-4430-bd93-8ae068f1af28\") " 
pod="openshift-machine-config-operator/machine-config-server-kjclh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.505167 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d894ce44-2759-40c6-9d2c-f26fa1691f0d-serving-cert\") pod \"service-ca-operator-777779d784-jdb9j\" (UID: \"d894ce44-2759-40c6-9d2c-f26fa1691f0d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.505221 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-registration-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.505254 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-client-ca\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.505511 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5d508920-c710-4060-b99f-12594f7c1fb4-tmpfs\") pod \"packageserver-d55dfcdfc-s2n89\" (UID: \"5d508920-c710-4060-b99f-12594f7c1fb4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.507341 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ba18b997-5143-40c5-9309-120e553e337a-metrics-tls\") pod \"dns-operator-744455d44c-gmn44\" (UID: \"ba18b997-5143-40c5-9309-120e553e337a\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmn44" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.509159 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b9d72e-e939-4cd7-9c2d-17fae6c25c4c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cm47v\" (UID: \"02b9d72e-e939-4cd7-9c2d-17fae6c25c4c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.504694 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-service-ca\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.516506 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/86d28008-71fe-4476-80ef-02d4086307b6-images\") pod \"machine-config-operator-74547568cd-sx4ts\" (UID: \"86d28008-71fe-4476-80ef-02d4086307b6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.517477 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/098c2b85-fe69-4df5-9ec3-43a6f25316c0-etcd-ca\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.519346 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-config\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.519622 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c83d0f19-d930-4144-9fbc-c581fe082422-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6jbnp\" (UID: \"c83d0f19-d930-4144-9fbc-c581fe082422\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.520260 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/86d28008-71fe-4476-80ef-02d4086307b6-proxy-tls\") pod \"machine-config-operator-74547568cd-sx4ts\" (UID: \"86d28008-71fe-4476-80ef-02d4086307b6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.521715 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d232bc4e-f92d-4b11-bab8-f271f05ebba9-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vl66d\" (UID: \"d232bc4e-f92d-4b11-bab8-f271f05ebba9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.522315 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c83d0f19-d930-4144-9fbc-c581fe082422-serving-cert\") pod \"openshift-config-operator-7777fb866f-6jbnp\" (UID: \"c83d0f19-d930-4144-9fbc-c581fe082422\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.522525 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1d175bca-3f73-4ad1-be29-f724a6baee2c-metrics-certs\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.523095 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d175bca-3f73-4ad1-be29-f724a6baee2c-service-ca-bundle\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.523853 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d1d195b-c8b9-4e9e-a47b-b0187cdd6195-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-frn4s\" (UID: \"3d1d195b-c8b9-4e9e-a47b-b0187cdd6195\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.524124 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b9d72e-e939-4cd7-9c2d-17fae6c25c4c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cm47v\" (UID: \"02b9d72e-e939-4cd7-9c2d-17fae6c25c4c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.524224 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-oauth-config\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.524531 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" event={"ID":"4a320bf5-a2af-4d11-9656-802d906d46b9","Type":"ContainerStarted","Data":"caad170253e2a910118d76ad06eb038fada64b2b27ce44aeda328f20940d58b9"} Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.525161 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/098c2b85-fe69-4df5-9ec3-43a6f25316c0-config\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.526211 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-oauth-serving-cert\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.526604 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86d28008-71fe-4476-80ef-02d4086307b6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-sx4ts\" (UID: \"86d28008-71fe-4476-80ef-02d4086307b6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.527607 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-trusted-ca-bundle\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.529485 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/1d175bca-3f73-4ad1-be29-f724a6baee2c-default-certificate\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.529704 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4e2cc58-fcbb-4e4c-84ec-762d66c13313-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-59dkc\" (UID: \"d4e2cc58-fcbb-4e4c-84ec-762d66c13313\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.531374 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" event={"ID":"e4d4c630-42d2-490d-8782-1fdb7723181d","Type":"ContainerStarted","Data":"e9d25bcf19ee2eaf25b70eaf81cd1a175809748ddeddcc0dd4d4bd827069f79a"} Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.532528 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/1d175bca-3f73-4ad1-be29-f724a6baee2c-stats-auth\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.532752 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2xkjk\" (UID: \"c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533051 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znllk\" (UniqueName: \"kubernetes.io/projected/d894ce44-2759-40c6-9d2c-f26fa1691f0d-kube-api-access-znllk\") pod \"service-ca-operator-777779d784-jdb9j\" (UID: \"d894ce44-2759-40c6-9d2c-f26fa1691f0d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533140 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/098c2b85-fe69-4df5-9ec3-43a6f25316c0-etcd-service-ca\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533169 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-config\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533199 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aeb400e-8352-4de5-baf4-e64073f57d32-config-volume\") pod \"collect-profiles-29492205-mdgtc\" (UID: \"7aeb400e-8352-4de5-baf4-e64073f57d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533234 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/80c97914-9a11-474e-a4f1-14dde70837cd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hj689\" (UID: \"80c97914-9a11-474e-a4f1-14dde70837cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533289 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/770cf38a-9f1f-441a-bf23-4944bd750e24-node-pullsecrets\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533349 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2f4n\" (UniqueName: \"kubernetes.io/projected/5c8277b4-0db2-4e62-aee2-4009a3afda61-kube-api-access-m2f4n\") pod \"catalog-operator-68c6474976-sx7x4\" (UID: \"5c8277b4-0db2-4e62-aee2-4009a3afda61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533410 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-serving-cert\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533632 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/098c2b85-fe69-4df5-9ec3-43a6f25316c0-etcd-client\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533716 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-audit\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533774 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d508920-c710-4060-b99f-12594f7c1fb4-webhook-cert\") pod \"packageserver-d55dfcdfc-s2n89\" (UID: \"5d508920-c710-4060-b99f-12594f7c1fb4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533808 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/098c2b85-fe69-4df5-9ec3-43a6f25316c0-etcd-service-ca\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533817 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b22f850b-fb2b-4839-8b41-bb2a92059a5c-metrics-tls\") pod \"dns-default-pjs7m\" (UID: \"b22f850b-fb2b-4839-8b41-bb2a92059a5c\") " pod="openshift-dns/dns-default-pjs7m" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533861 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-config\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " 
pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533906 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/770cf38a-9f1f-441a-bf23-4944bd750e24-etcd-client\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533954 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/770cf38a-9f1f-441a-bf23-4944bd750e24-serving-cert\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.533987 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-546pq\" (UniqueName: \"kubernetes.io/projected/c83d0f19-d930-4144-9fbc-c581fe082422-kube-api-access-546pq\") pod \"openshift-config-operator-7777fb866f-6jbnp\" (UID: \"c83d0f19-d930-4144-9fbc-c581fe082422\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.534369 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/098c2b85-fe69-4df5-9ec3-43a6f25316c0-serving-cert\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.534429 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d4e2cc58-fcbb-4e4c-84ec-762d66c13313-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-59dkc\" (UID: \"d4e2cc58-fcbb-4e4c-84ec-762d66c13313\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.535581 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0092259f-7233-4db1-9ed5-667deb592e96-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-h8z99\" (UID: \"0092259f-7233-4db1-9ed5-667deb592e96\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h8z99" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.536928 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46-proxy-tls\") pod \"machine-config-controller-84d6567774-2xkjk\" (UID: \"c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.539068 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-serving-cert\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.543876 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-86zl8\" (UniqueName: \"kubernetes.io/projected/ed96c1d9-55f9-48df-970b-2b1e71a90633-kube-api-access-86zl8\") pod \"console-f9d7485db-qnqlr\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.559763 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mxvc\" (UniqueName: \"kubernetes.io/projected/86d28008-71fe-4476-80ef-02d4086307b6-kube-api-access-4mxvc\") pod \"machine-config-operator-74547568cd-sx4ts\" (UID: \"86d28008-71fe-4476-80ef-02d4086307b6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.589244 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4e2cc58-fcbb-4e4c-84ec-762d66c13313-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-59dkc\" (UID: \"d4e2cc58-fcbb-4e4c-84ec-762d66c13313\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.604302 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.607501 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzmhx\" (UniqueName: \"kubernetes.io/projected/d4e2cc58-fcbb-4e4c-84ec-762d66c13313-kube-api-access-mzmhx\") pod \"cluster-image-registry-operator-dc59b4c8b-59dkc\" (UID: \"d4e2cc58-fcbb-4e4c-84ec-762d66c13313\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.623935 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.626427 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs46n\" (UniqueName: \"kubernetes.io/projected/b96780a0-72a6-4ee0-ae94-60221d4f0a58-kube-api-access-zs46n\") pod \"downloads-7954f5f757-msgbv\" (UID: \"b96780a0-72a6-4ee0-ae94-60221d4f0a58\") " pod="openshift-console/downloads-7954f5f757-msgbv" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.634951 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:37 crc kubenswrapper[5049]: E0127 16:59:37.635087 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:38.135056825 +0000 UTC m=+153.234030374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635136 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-mountpoint-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635158 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c8277b4-0db2-4e62-aee2-4009a3afda61-profile-collector-cert\") pod \"catalog-operator-68c6474976-sx7x4\" (UID: \"5c8277b4-0db2-4e62-aee2-4009a3afda61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635192 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89v8d\" (UniqueName: \"kubernetes.io/projected/3a79954c-d75b-4e08-b5f3-ffb5783d8ac7-kube-api-access-89v8d\") pod \"service-ca-9c57cc56f-j7hkh\" (UID: \"3a79954c-d75b-4e08-b5f3-ffb5783d8ac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-j7hkh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635209 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3a79954c-d75b-4e08-b5f3-ffb5783d8ac7-signing-cabundle\") pod \"service-ca-9c57cc56f-j7hkh\" (UID: \"3a79954c-d75b-4e08-b5f3-ffb5783d8ac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-j7hkh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635238 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/770cf38a-9f1f-441a-bf23-4944bd750e24-encryption-config\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635253 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-socket-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635295 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33c5f582-79d8-4ba1-8806-1104540ed6eb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kc8qf\" (UID: \"33c5f582-79d8-4ba1-8806-1104540ed6eb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635322 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
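
Neither failed operation is retried immediately: nestedpendingoperations records "No retries permitted until ..." with a durationBeforeRetry of 500ms, both for the MountDevice attempt earlier and for this TearDown. The 500ms initial delay is visible in the log itself; that the delay roughly doubles on each consecutive failure of the same volume operation, up to a cap of about two minutes, is the kubelet's usual exponential-backoff behavior and is stated here as an assumption. A short sketch of that schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	// 500ms initial delay as logged ("durationBeforeRetry 500ms"); the
	// doubling factor and the cap are assumptions about the kubelet's
	// backoff, not values taken from this log.
	delay := 500 * time.Millisecond
	maxDelay := 2*time.Minute + 2*time.Second
	for failures := 1; failures <= 10; failures++ {
		fmt.Printf("failure %2d -> next retry in %v\n", failures, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Once the hostpath-provisioner driver registers, the pending mount for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 (pod image-registry-697d97f7c8-ttw4x) and this unmount (pod UID 8f668bae-612b-4b75-9490-919e737c6a3b) should both succeed on a subsequent retry.
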
\"kubernetes.io/secret/ed69d5ac-f3d2-42f7-a923-265ad3aad708-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8p7nq\" (UID: \"ed69d5ac-f3d2-42f7-a923-265ad3aad708\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635355 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c5f582-79d8-4ba1-8806-1104540ed6eb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kc8qf\" (UID: \"33c5f582-79d8-4ba1-8806-1104540ed6eb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635371 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsd6t\" (UniqueName: \"kubernetes.io/projected/9987ba69-abc5-4d37-84aa-a708e31c1586-kube-api-access-bsd6t\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635390 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5w84\" (UniqueName: \"kubernetes.io/projected/a1209cc6-7d3a-4431-80ed-878ad81fbd3d-kube-api-access-r5w84\") pod \"ingress-canary-76ltz\" (UID: \"a1209cc6-7d3a-4431-80ed-878ad81fbd3d\") " pod="openshift-ingress-canary/ingress-canary-76ltz" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635405 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d894ce44-2759-40c6-9d2c-f26fa1691f0d-serving-cert\") pod \"service-ca-operator-777779d784-jdb9j\" (UID: \"d894ce44-2759-40c6-9d2c-f26fa1691f0d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635428 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-registration-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635442 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-client-ca\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635457 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8dh5\" (UniqueName: \"kubernetes.io/projected/ae3ce8f3-8ec9-4430-bd93-8ae068f1af28-kube-api-access-x8dh5\") pod \"machine-config-server-kjclh\" (UID: \"ae3ce8f3-8ec9-4430-bd93-8ae068f1af28\") " pod="openshift-machine-config-operator/machine-config-server-kjclh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635491 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znllk\" (UniqueName: \"kubernetes.io/projected/d894ce44-2759-40c6-9d2c-f26fa1691f0d-kube-api-access-znllk\") pod \"service-ca-operator-777779d784-jdb9j\" (UID: 
\"d894ce44-2759-40c6-9d2c-f26fa1691f0d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635516 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-config\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635533 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5d508920-c710-4060-b99f-12594f7c1fb4-tmpfs\") pod \"packageserver-d55dfcdfc-s2n89\" (UID: \"5d508920-c710-4060-b99f-12594f7c1fb4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635549 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aeb400e-8352-4de5-baf4-e64073f57d32-config-volume\") pod \"collect-profiles-29492205-mdgtc\" (UID: \"7aeb400e-8352-4de5-baf4-e64073f57d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635568 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/80c97914-9a11-474e-a4f1-14dde70837cd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hj689\" (UID: \"80c97914-9a11-474e-a4f1-14dde70837cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635584 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/770cf38a-9f1f-441a-bf23-4944bd750e24-node-pullsecrets\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635600 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2f4n\" (UniqueName: \"kubernetes.io/projected/5c8277b4-0db2-4e62-aee2-4009a3afda61-kube-api-access-m2f4n\") pod \"catalog-operator-68c6474976-sx7x4\" (UID: \"5c8277b4-0db2-4e62-aee2-4009a3afda61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635627 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d508920-c710-4060-b99f-12594f7c1fb4-webhook-cert\") pod \"packageserver-d55dfcdfc-s2n89\" (UID: \"5d508920-c710-4060-b99f-12594f7c1fb4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635645 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b22f850b-fb2b-4839-8b41-bb2a92059a5c-metrics-tls\") pod \"dns-default-pjs7m\" (UID: \"b22f850b-fb2b-4839-8b41-bb2a92059a5c\") " pod="openshift-dns/dns-default-pjs7m" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635658 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-audit\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635689 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-config\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635704 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/770cf38a-9f1f-441a-bf23-4944bd750e24-etcd-client\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635721 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/770cf38a-9f1f-441a-bf23-4944bd750e24-serving-cert\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635751 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd191306-613f-4c2f-9b3e-e38146dd4400-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6ckbp\" (UID: \"cd191306-613f-4c2f-9b3e-e38146dd4400\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635767 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-image-import-ca\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635798 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635815 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1209cc6-7d3a-4431-80ed-878ad81fbd3d-cert\") pod \"ingress-canary-76ltz\" (UID: \"a1209cc6-7d3a-4431-80ed-878ad81fbd3d\") " pod="openshift-ingress-canary/ingress-canary-76ltz" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635849 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8nlm\" (UniqueName: \"kubernetes.io/projected/80c97914-9a11-474e-a4f1-14dde70837cd-kube-api-access-f8nlm\") pod \"olm-operator-6b444d44fb-hj689\" (UID: \"80c97914-9a11-474e-a4f1-14dde70837cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635865 5049 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d894ce44-2759-40c6-9d2c-f26fa1691f0d-config\") pod \"service-ca-operator-777779d784-jdb9j\" (UID: \"d894ce44-2759-40c6-9d2c-f26fa1691f0d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635880 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d508920-c710-4060-b99f-12594f7c1fb4-apiservice-cert\") pod \"packageserver-d55dfcdfc-s2n89\" (UID: \"5d508920-c710-4060-b99f-12594f7c1fb4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635907 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34cc919b-d826-444d-9748-e3e6704d03cb-auth-proxy-config\") pod \"machine-approver-56656f9798-9sm29\" (UID: \"34cc919b-d826-444d-9748-e3e6704d03cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635924 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3a79954c-d75b-4e08-b5f3-ffb5783d8ac7-signing-key\") pod \"service-ca-9c57cc56f-j7hkh\" (UID: \"3a79954c-d75b-4e08-b5f3-ffb5783d8ac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-j7hkh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635968 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84px8\" (UniqueName: \"kubernetes.io/projected/7e762e88-c00a-49b7-8a84-48c7fe50b602-kube-api-access-84px8\") pod \"marketplace-operator-79b997595-q7vfz\" (UID: \"7e762e88-c00a-49b7-8a84-48c7fe50b602\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.635984 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636000 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-csi-data-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636017 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg95d\" (UniqueName: \"kubernetes.io/projected/7de8021c-2a6e-43d0-bd02-b297e5583c52-kube-api-access-sg95d\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636042 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae3ce8f3-8ec9-4430-bd93-8ae068f1af28-certs\") pod \"machine-config-server-kjclh\" (UID: \"ae3ce8f3-8ec9-4430-bd93-8ae068f1af28\") " 
pod="openshift-machine-config-operator/machine-config-server-kjclh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636058 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtp6f\" (UniqueName: \"kubernetes.io/projected/770cf38a-9f1f-441a-bf23-4944bd750e24-kube-api-access-xtp6f\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636086 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e762e88-c00a-49b7-8a84-48c7fe50b602-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q7vfz\" (UID: \"7e762e88-c00a-49b7-8a84-48c7fe50b602\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636113 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlsxx\" (UniqueName: \"kubernetes.io/projected/ed398f74-73d3-4e3b-a7b8-57d283e9adfa-kube-api-access-vlsxx\") pod \"multus-admission-controller-857f4d67dd-v9tsv\" (UID: \"ed398f74-73d3-4e3b-a7b8-57d283e9adfa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v9tsv" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636155 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st79p\" (UniqueName: \"kubernetes.io/projected/cd191306-613f-4c2f-9b3e-e38146dd4400-kube-api-access-st79p\") pod \"package-server-manager-789f6589d5-6ckbp\" (UID: \"cd191306-613f-4c2f-9b3e-e38146dd4400\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636170 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-trusted-ca-bundle\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636197 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ed398f74-73d3-4e3b-a7b8-57d283e9adfa-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-v9tsv\" (UID: \"ed398f74-73d3-4e3b-a7b8-57d283e9adfa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v9tsv" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636221 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrh4s\" (UniqueName: \"kubernetes.io/projected/b22f850b-fb2b-4839-8b41-bb2a92059a5c-kube-api-access-wrh4s\") pod \"dns-default-pjs7m\" (UID: \"b22f850b-fb2b-4839-8b41-bb2a92059a5c\") " pod="openshift-dns/dns-default-pjs7m" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636277 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/34cc919b-d826-444d-9748-e3e6704d03cb-machine-approver-tls\") pod \"machine-approver-56656f9798-9sm29\" (UID: \"34cc919b-d826-444d-9748-e3e6704d03cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636294 5049 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae3ce8f3-8ec9-4430-bd93-8ae068f1af28-node-bootstrap-token\") pod \"machine-config-server-kjclh\" (UID: \"ae3ce8f3-8ec9-4430-bd93-8ae068f1af28\") " pod="openshift-machine-config-operator/machine-config-server-kjclh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636327 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b22f850b-fb2b-4839-8b41-bb2a92059a5c-config-volume\") pod \"dns-default-pjs7m\" (UID: \"b22f850b-fb2b-4839-8b41-bb2a92059a5c\") " pod="openshift-dns/dns-default-pjs7m" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636371 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c8277b4-0db2-4e62-aee2-4009a3afda61-srv-cert\") pod \"catalog-operator-68c6474976-sx7x4\" (UID: \"5c8277b4-0db2-4e62-aee2-4009a3afda61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636387 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn4bk\" (UniqueName: \"kubernetes.io/projected/34cc919b-d826-444d-9748-e3e6704d03cb-kube-api-access-mn4bk\") pod \"machine-approver-56656f9798-9sm29\" (UID: \"34cc919b-d826-444d-9748-e3e6704d03cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636403 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/80c97914-9a11-474e-a4f1-14dde70837cd-srv-cert\") pod \"olm-operator-6b444d44fb-hj689\" (UID: \"80c97914-9a11-474e-a4f1-14dde70837cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636428 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed69d5ac-f3d2-42f7-a923-265ad3aad708-config\") pod \"kube-apiserver-operator-766d6c64bb-8p7nq\" (UID: \"ed69d5ac-f3d2-42f7-a923-265ad3aad708\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636463 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e762e88-c00a-49b7-8a84-48c7fe50b602-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q7vfz\" (UID: \"7e762e88-c00a-49b7-8a84-48c7fe50b602\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636487 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34cc919b-d826-444d-9748-e3e6704d03cb-config\") pod \"machine-approver-56656f9798-9sm29\" (UID: \"34cc919b-d826-444d-9748-e3e6704d03cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636502 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsq7p\" (UniqueName: \"kubernetes.io/projected/5d508920-c710-4060-b99f-12594f7c1fb4-kube-api-access-tsq7p\") pod \"packageserver-d55dfcdfc-s2n89\" (UID: 
\"5d508920-c710-4060-b99f-12594f7c1fb4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636519 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aeb400e-8352-4de5-baf4-e64073f57d32-secret-volume\") pod \"collect-profiles-29492205-mdgtc\" (UID: \"7aeb400e-8352-4de5-baf4-e64073f57d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636545 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dwnw\" (UniqueName: \"kubernetes.io/projected/33c5f582-79d8-4ba1-8806-1104540ed6eb-kube-api-access-9dwnw\") pod \"kube-storage-version-migrator-operator-b67b599dd-kc8qf\" (UID: \"33c5f582-79d8-4ba1-8806-1104540ed6eb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636561 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4plxs\" (UniqueName: \"kubernetes.io/projected/ef242174-8534-44ce-bc43-cb6648c594c4-kube-api-access-4plxs\") pod \"migrator-59844c95c7-9vk5m\" (UID: \"ef242174-8534-44ce-bc43-cb6648c594c4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vk5m" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636577 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6gp4\" (UniqueName: \"kubernetes.io/projected/7aeb400e-8352-4de5-baf4-e64073f57d32-kube-api-access-x6gp4\") pod \"collect-profiles-29492205-mdgtc\" (UID: \"7aeb400e-8352-4de5-baf4-e64073f57d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636592 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/770cf38a-9f1f-441a-bf23-4944bd750e24-audit-dir\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636640 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-plugins-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636662 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed69d5ac-f3d2-42f7-a923-265ad3aad708-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8p7nq\" (UID: \"ed69d5ac-f3d2-42f7-a923-265ad3aad708\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.636716 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9987ba69-abc5-4d37-84aa-a708e31c1586-serving-cert\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: 
I0127 16:59:37.636732 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-etcd-serving-ca\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.637269 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-etcd-serving-ca\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.639456 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.639621 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3a79954c-d75b-4e08-b5f3-ffb5783d8ac7-signing-key\") pod \"service-ca-9c57cc56f-j7hkh\" (UID: \"3a79954c-d75b-4e08-b5f3-ffb5783d8ac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-j7hkh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.639687 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-csi-data-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.639691 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-mountpoint-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.641890 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.642319 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgtml\" (UniqueName: \"kubernetes.io/projected/098c2b85-fe69-4df5-9ec3-43a6f25316c0-kube-api-access-pgtml\") pod \"etcd-operator-b45778765-cpnxt\" (UID: \"098c2b85-fe69-4df5-9ec3-43a6f25316c0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.643396 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c8277b4-0db2-4e62-aee2-4009a3afda61-profile-collector-cert\") pod \"catalog-operator-68c6474976-sx7x4\" (UID: \"5c8277b4-0db2-4e62-aee2-4009a3afda61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.644462 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aeb400e-8352-4de5-baf4-e64073f57d32-config-volume\") pod \"collect-profiles-29492205-mdgtc\" (UID: \"7aeb400e-8352-4de5-baf4-e64073f57d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.646181 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3a79954c-d75b-4e08-b5f3-ffb5783d8ac7-signing-cabundle\") pod \"service-ca-9c57cc56f-j7hkh\" (UID: \"3a79954c-d75b-4e08-b5f3-ffb5783d8ac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-j7hkh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.646400 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-registration-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.648834 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-client-ca\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.649175 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d894ce44-2759-40c6-9d2c-f26fa1691f0d-serving-cert\") pod \"service-ca-operator-777779d784-jdb9j\" (UID: \"d894ce44-2759-40c6-9d2c-f26fa1691f0d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.649342 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-socket-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.650644 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/80c97914-9a11-474e-a4f1-14dde70837cd-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hj689\" (UID: \"80c97914-9a11-474e-a4f1-14dde70837cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.651802 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/770cf38a-9f1f-441a-bf23-4944bd750e24-audit-dir\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.652368 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/770cf38a-9f1f-441a-bf23-4944bd750e24-node-pullsecrets\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.652433 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7de8021c-2a6e-43d0-bd02-b297e5583c52-plugins-dir\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.653884 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5d508920-c710-4060-b99f-12594f7c1fb4-tmpfs\") pod \"packageserver-d55dfcdfc-s2n89\" (UID: \"5d508920-c710-4060-b99f-12594f7c1fb4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.654037 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-image-import-ca\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.654605 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aeb400e-8352-4de5-baf4-e64073f57d32-secret-volume\") pod \"collect-profiles-29492205-mdgtc\" (UID: \"7aeb400e-8352-4de5-baf4-e64073f57d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.655092 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-audit\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.656297 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/770cf38a-9f1f-441a-bf23-4944bd750e24-encryption-config\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.656476 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/ed398f74-73d3-4e3b-a7b8-57d283e9adfa-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-v9tsv\" (UID: \"ed398f74-73d3-4e3b-a7b8-57d283e9adfa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v9tsv" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.656887 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33c5f582-79d8-4ba1-8806-1104540ed6eb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-kc8qf\" (UID: \"33c5f582-79d8-4ba1-8806-1104540ed6eb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.657156 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd191306-613f-4c2f-9b3e-e38146dd4400-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6ckbp\" (UID: \"cd191306-613f-4c2f-9b3e-e38146dd4400\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp" Jan 27 16:59:37 crc kubenswrapper[5049]: E0127 16:59:37.657165 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:38.157149813 +0000 UTC m=+153.256123362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.658177 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed69d5ac-f3d2-42f7-a923-265ad3aad708-config\") pod \"kube-apiserver-operator-766d6c64bb-8p7nq\" (UID: \"ed69d5ac-f3d2-42f7-a923-265ad3aad708\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.660911 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/34cc919b-d826-444d-9748-e3e6704d03cb-machine-approver-tls\") pod \"machine-approver-56656f9798-9sm29\" (UID: \"34cc919b-d826-444d-9748-e3e6704d03cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.661079 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-trusted-ca-bundle\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.661106 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-config\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") 
" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.661450 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/770cf38a-9f1f-441a-bf23-4944bd750e24-config\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.662024 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.662521 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e762e88-c00a-49b7-8a84-48c7fe50b602-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q7vfz\" (UID: \"7e762e88-c00a-49b7-8a84-48c7fe50b602\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.664208 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ae3ce8f3-8ec9-4430-bd93-8ae068f1af28-node-bootstrap-token\") pod \"machine-config-server-kjclh\" (UID: \"ae3ce8f3-8ec9-4430-bd93-8ae068f1af28\") " pod="openshift-machine-config-operator/machine-config-server-kjclh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.664301 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34cc919b-d826-444d-9748-e3e6704d03cb-config\") pod \"machine-approver-56656f9798-9sm29\" (UID: \"34cc919b-d826-444d-9748-e3e6704d03cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.665124 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b22f850b-fb2b-4839-8b41-bb2a92059a5c-metrics-tls\") pod \"dns-default-pjs7m\" (UID: \"b22f850b-fb2b-4839-8b41-bb2a92059a5c\") " pod="openshift-dns/dns-default-pjs7m" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.665123 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33c5f582-79d8-4ba1-8806-1104540ed6eb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-kc8qf\" (UID: \"33c5f582-79d8-4ba1-8806-1104540ed6eb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.665639 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d894ce44-2759-40c6-9d2c-f26fa1691f0d-config\") pod \"service-ca-operator-777779d784-jdb9j\" (UID: \"d894ce44-2759-40c6-9d2c-f26fa1691f0d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.666148 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b22f850b-fb2b-4839-8b41-bb2a92059a5c-config-volume\") pod \"dns-default-pjs7m\" (UID: \"b22f850b-fb2b-4839-8b41-bb2a92059a5c\") " pod="openshift-dns/dns-default-pjs7m" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.666330 
5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed69d5ac-f3d2-42f7-a923-265ad3aad708-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8p7nq\" (UID: \"ed69d5ac-f3d2-42f7-a923-265ad3aad708\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.666781 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34cc919b-d826-444d-9748-e3e6704d03cb-auth-proxy-config\") pod \"machine-approver-56656f9798-9sm29\" (UID: \"34cc919b-d826-444d-9748-e3e6704d03cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.666885 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/770cf38a-9f1f-441a-bf23-4944bd750e24-etcd-client\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.667157 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9987ba69-abc5-4d37-84aa-a708e31c1586-serving-cert\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.668015 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5d508920-c710-4060-b99f-12594f7c1fb4-apiservice-cert\") pod \"packageserver-d55dfcdfc-s2n89\" (UID: \"5d508920-c710-4060-b99f-12594f7c1fb4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.668164 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5d508920-c710-4060-b99f-12594f7c1fb4-webhook-cert\") pod \"packageserver-d55dfcdfc-s2n89\" (UID: \"5d508920-c710-4060-b99f-12594f7c1fb4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.668112 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e762e88-c00a-49b7-8a84-48c7fe50b602-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q7vfz\" (UID: \"7e762e88-c00a-49b7-8a84-48c7fe50b602\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.669256 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c8277b4-0db2-4e62-aee2-4009a3afda61-srv-cert\") pod \"catalog-operator-68c6474976-sx7x4\" (UID: \"5c8277b4-0db2-4e62-aee2-4009a3afda61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.669791 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ae3ce8f3-8ec9-4430-bd93-8ae068f1af28-certs\") pod \"machine-config-server-kjclh\" (UID: \"ae3ce8f3-8ec9-4430-bd93-8ae068f1af28\") " 
pod="openshift-machine-config-operator/machine-config-server-kjclh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.670089 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/80c97914-9a11-474e-a4f1-14dde70837cd-srv-cert\") pod \"olm-operator-6b444d44fb-hj689\" (UID: \"80c97914-9a11-474e-a4f1-14dde70837cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.672022 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/770cf38a-9f1f-441a-bf23-4944bd750e24-serving-cert\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.701349 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1209cc6-7d3a-4431-80ed-878ad81fbd3d-cert\") pod \"ingress-canary-76ltz\" (UID: \"a1209cc6-7d3a-4431-80ed-878ad81fbd3d\") " pod="openshift-ingress-canary/ingress-canary-76ltz" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.701535 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6tkq\" (UniqueName: \"kubernetes.io/projected/ba18b997-5143-40c5-9309-120e553e337a-kube-api-access-c6tkq\") pod \"dns-operator-744455d44c-gmn44\" (UID: \"ba18b997-5143-40c5-9309-120e553e337a\") " pod="openshift-dns-operator/dns-operator-744455d44c-gmn44" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.703789 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngchp\" (UniqueName: \"kubernetes.io/projected/c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46-kube-api-access-ngchp\") pod \"machine-config-controller-84d6567774-2xkjk\" (UID: \"c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.719063 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.732540 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9pzp\" (UniqueName: \"kubernetes.io/projected/0092259f-7233-4db1-9ed5-667deb592e96-kube-api-access-n9pzp\") pod \"control-plane-machine-set-operator-78cbb6b69f-h8z99\" (UID: \"0092259f-7233-4db1-9ed5-667deb592e96\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h8z99" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.736037 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-msgbv" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.736492 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf5c9\" (UniqueName: \"kubernetes.io/projected/1d175bca-3f73-4ad1-be29-f724a6baee2c-kube-api-access-hf5c9\") pod \"router-default-5444994796-q5xl9\" (UID: \"1d175bca-3f73-4ad1-be29-f724a6baee2c\") " pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.737563 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:37 crc kubenswrapper[5049]: E0127 16:59:37.738073 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:38.238055335 +0000 UTC m=+153.337028884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.749932 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.764105 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.766281 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5zbr\" (UniqueName: \"kubernetes.io/projected/3d1d195b-c8b9-4e9e-a47b-b0187cdd6195-kube-api-access-k5zbr\") pod \"cluster-samples-operator-665b6dd947-frn4s\" (UID: \"3d1d195b-c8b9-4e9e-a47b-b0187cdd6195\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.767204 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.781049 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8hq\" (UniqueName: \"kubernetes.io/projected/02b9d72e-e939-4cd7-9c2d-17fae6c25c4c-kube-api-access-2d8hq\") pod \"openshift-apiserver-operator-796bbdcf4f-cm47v\" (UID: \"02b9d72e-e939-4cd7-9c2d-17fae6c25c4c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.804798 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-546pq\" (UniqueName: \"kubernetes.io/projected/c83d0f19-d930-4144-9fbc-c581fe082422-kube-api-access-546pq\") pod \"openshift-config-operator-7777fb866f-6jbnp\" (UID: \"c83d0f19-d930-4144-9fbc-c581fe082422\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.839831 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84px8\" (UniqueName: \"kubernetes.io/projected/7e762e88-c00a-49b7-8a84-48c7fe50b602-kube-api-access-84px8\") pod \"marketplace-operator-79b997595-q7vfz\" (UID: \"7e762e88-c00a-49b7-8a84-48c7fe50b602\") " pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.839872 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: E0127 16:59:37.840229 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:38.340213563 +0000 UTC m=+153.439187112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.841398 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg95d\" (UniqueName: \"kubernetes.io/projected/7de8021c-2a6e-43d0-bd02-b297e5583c52-kube-api-access-sg95d\") pod \"csi-hostpathplugin-4gw2c\" (UID: \"7de8021c-2a6e-43d0-bd02-b297e5583c52\") " pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.877372 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtp6f\" (UniqueName: \"kubernetes.io/projected/770cf38a-9f1f-441a-bf23-4944bd750e24-kube-api-access-xtp6f\") pod \"apiserver-76f77b778f-b444j\" (UID: \"770cf38a-9f1f-441a-bf23-4944bd750e24\") " pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.907474 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.918245 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsd6t\" (UniqueName: \"kubernetes.io/projected/9987ba69-abc5-4d37-84aa-a708e31c1586-kube-api-access-bsd6t\") pod \"controller-manager-879f6c89f-c5pfl\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.925937 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89v8d\" (UniqueName: \"kubernetes.io/projected/3a79954c-d75b-4e08-b5f3-ffb5783d8ac7-kube-api-access-89v8d\") pod \"service-ca-9c57cc56f-j7hkh\" (UID: \"3a79954c-d75b-4e08-b5f3-ffb5783d8ac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-j7hkh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.942287 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:37 crc kubenswrapper[5049]: E0127 16:59:37.942944 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:38.442398784 +0000 UTC m=+153.541372333 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.943195 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.944995 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5w84\" (UniqueName: \"kubernetes.io/projected/a1209cc6-7d3a-4431-80ed-878ad81fbd3d-kube-api-access-r5w84\") pod \"ingress-canary-76ltz\" (UID: \"a1209cc6-7d3a-4431-80ed-878ad81fbd3d\") " pod="openshift-ingress-canary/ingress-canary-76ltz" Jan 27 16:59:37 crc kubenswrapper[5049]: E0127 16:59:37.945896 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:38.445887142 +0000 UTC m=+153.544860691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.955467 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.961575 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8dh5\" (UniqueName: \"kubernetes.io/projected/ae3ce8f3-8ec9-4430-bd93-8ae068f1af28-kube-api-access-x8dh5\") pod \"machine-config-server-kjclh\" (UID: \"ae3ce8f3-8ec9-4430-bd93-8ae068f1af28\") " pod="openshift-machine-config-operator/machine-config-server-kjclh" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.974688 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.983292 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znllk\" (UniqueName: \"kubernetes.io/projected/d894ce44-2759-40c6-9d2c-f26fa1691f0d-kube-api-access-znllk\") pod \"service-ca-operator-777779d784-jdb9j\" (UID: \"d894ce44-2759-40c6-9d2c-f26fa1691f0d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j" Jan 27 16:59:37 crc kubenswrapper[5049]: I0127 16:59:37.991703 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.007071 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gmn44" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.010808 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dwnw\" (UniqueName: \"kubernetes.io/projected/33c5f582-79d8-4ba1-8806-1104540ed6eb-kube-api-access-9dwnw\") pod \"kube-storage-version-migrator-operator-b67b599dd-kc8qf\" (UID: \"33c5f582-79d8-4ba1-8806-1104540ed6eb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.013594 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c"] Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.030021 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h8z99" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.035239 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4plxs\" (UniqueName: \"kubernetes.io/projected/ef242174-8534-44ce-bc43-cb6648c594c4-kube-api-access-4plxs\") pod \"migrator-59844c95c7-9vk5m\" (UID: \"ef242174-8534-44ce-bc43-cb6648c594c4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vk5m" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.039513 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6gp4\" (UniqueName: \"kubernetes.io/projected/7aeb400e-8352-4de5-baf4-e64073f57d32-kube-api-access-x6gp4\") pod \"collect-profiles-29492205-mdgtc\" (UID: \"7aeb400e-8352-4de5-baf4-e64073f57d32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.041551 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.051146 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:38 crc kubenswrapper[5049]: E0127 16:59:38.051313 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:38.551282486 +0000 UTC m=+153.650256035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.051481 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:38 crc kubenswrapper[5049]: E0127 16:59:38.051940 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:38.551913167 +0000 UTC m=+153.650886716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.080530 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d"] Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.083052 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.083069 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlsxx\" (UniqueName: \"kubernetes.io/projected/ed398f74-73d3-4e3b-a7b8-57d283e9adfa-kube-api-access-vlsxx\") pod \"multus-admission-controller-857f4d67dd-v9tsv\" (UID: \"ed398f74-73d3-4e3b-a7b8-57d283e9adfa\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v9tsv" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.087505 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2f4n\" (UniqueName: \"kubernetes.io/projected/5c8277b4-0db2-4e62-aee2-4009a3afda61-kube-api-access-m2f4n\") pod \"catalog-operator-68c6474976-sx7x4\" (UID: \"5c8277b4-0db2-4e62-aee2-4009a3afda61\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.089497 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vk5m" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.109234 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc"] Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.111277 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st79p\" (UniqueName: \"kubernetes.io/projected/cd191306-613f-4c2f-9b3e-e38146dd4400-kube-api-access-st79p\") pod \"package-server-manager-789f6589d5-6ckbp\" (UID: \"cd191306-613f-4c2f-9b3e-e38146dd4400\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.115275 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-v9tsv" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.119310 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrh4s\" (UniqueName: \"kubernetes.io/projected/b22f850b-fb2b-4839-8b41-bb2a92059a5c-kube-api-access-wrh4s\") pod \"dns-default-pjs7m\" (UID: \"b22f850b-fb2b-4839-8b41-bb2a92059a5c\") " pod="openshift-dns/dns-default-pjs7m" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.122462 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.139551 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed69d5ac-f3d2-42f7-a923-265ad3aad708-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8p7nq\" (UID: \"ed69d5ac-f3d2-42f7-a923-265ad3aad708\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.140227 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.143196 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-b444j" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.150269 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.152892 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:38 crc kubenswrapper[5049]: E0127 16:59:38.153308 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:38.653283137 +0000 UTC m=+153.752256686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.160191 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.166647 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-j7hkh" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.172055 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsq7p\" (UniqueName: \"kubernetes.io/projected/5d508920-c710-4060-b99f-12594f7c1fb4-kube-api-access-tsq7p\") pod \"packageserver-d55dfcdfc-s2n89\" (UID: \"5d508920-c710-4060-b99f-12594f7c1fb4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.174363 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.181153 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.181444 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8nlm\" (UniqueName: \"kubernetes.io/projected/80c97914-9a11-474e-a4f1-14dde70837cd-kube-api-access-f8nlm\") pod \"olm-operator-6b444d44fb-hj689\" (UID: \"80c97914-9a11-474e-a4f1-14dde70837cd\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.191564 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.197806 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn4bk\" (UniqueName: \"kubernetes.io/projected/34cc919b-d826-444d-9748-e3e6704d03cb-kube-api-access-mn4bk\") pod \"machine-approver-56656f9798-9sm29\" (UID: \"34cc919b-d826-444d-9748-e3e6704d03cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.200620 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts"] Jan 27 16:59:38 crc kubenswrapper[5049]: W0127 16:59:38.214252 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4e2cc58_fcbb_4e4c_84ec_762d66c13313.slice/crio-38bcb47ef261f987ccf0de6e7f8e2cb58d38d0e657591ba7f61a7f52b147e7dc WatchSource:0}: Error finding container 38bcb47ef261f987ccf0de6e7f8e2cb58d38d0e657591ba7f61a7f52b147e7dc: Status 404 returned error can't find the container with id 38bcb47ef261f987ccf0de6e7f8e2cb58d38d0e657591ba7f61a7f52b147e7dc Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.215827 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-pjs7m" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.241324 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-76ltz" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.243873 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.247755 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-msgbv"] Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.256003 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kjclh" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.259326 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:38 crc kubenswrapper[5049]: E0127 16:59:38.259907 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:38.759889681 +0000 UTC m=+153.858863230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:38 crc kubenswrapper[5049]: W0127 16:59:38.357991 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb96780a0_72a6_4ee0_ae94_60221d4f0a58.slice/crio-1a73a3643085843ce4b30ae289e534f3c604fb2a06371f5cf050271e2b498b2e WatchSource:0}: Error finding container 1a73a3643085843ce4b30ae289e534f3c604fb2a06371f5cf050271e2b498b2e: Status 404 returned error can't find the container with id 1a73a3643085843ce4b30ae289e534f3c604fb2a06371f5cf050271e2b498b2e Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.360453 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:38 crc kubenswrapper[5049]: E0127 16:59:38.360684 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:38.860649862 +0000 UTC m=+153.959623411 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.362785 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:38 crc kubenswrapper[5049]: E0127 16:59:38.363133 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:38.863121472 +0000 UTC m=+153.962095021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.398181 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.406613 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.472978 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.473640 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:38 crc kubenswrapper[5049]: E0127 16:59:38.474023 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:38.973999452 +0000 UTC m=+154.072973001 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.516492 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-qxz5r" podStartSLOduration=131.516477216 podStartE2EDuration="2m11.516477216s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:38.513750594 +0000 UTC m=+153.612724163" watchObservedRunningTime="2026-01-27 16:59:38.516477216 +0000 UTC m=+153.615450765" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.517252 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-4gw2c"] Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.565209 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" podStartSLOduration=132.565190831 podStartE2EDuration="2m12.565190831s" podCreationTimestamp="2026-01-27 16:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:38.562028038 +0000 UTC m=+153.661001587" watchObservedRunningTime="2026-01-27 16:59:38.565190831 +0000 UTC m=+153.664164380" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.576513 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:38 crc kubenswrapper[5049]: E0127 16:59:38.576824 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:39.076812483 +0000 UTC m=+154.175786032 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.609763 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" event={"ID":"e4d4c630-42d2-490d-8782-1fdb7723181d","Type":"ContainerStarted","Data":"2c89fd12c0520e0a291f6cd0c5db802a296db3a8a33d78ae72fa1dc0f432f0cb"} Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.609804 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" event={"ID":"e4d4c630-42d2-490d-8782-1fdb7723181d","Type":"ContainerStarted","Data":"adce271c22a40232a15cc292c8429f76d9dc11a3a08b48afbe56a08cd82ce741"} Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.618450 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" event={"ID":"d4e2cc58-fcbb-4e4c-84ec-762d66c13313","Type":"ContainerStarted","Data":"38bcb47ef261f987ccf0de6e7f8e2cb58d38d0e657591ba7f61a7f52b147e7dc"} Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.627447 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cpnxt"] Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.655980 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-msgbv" event={"ID":"b96780a0-72a6-4ee0-ae94-60221d4f0a58","Type":"ContainerStarted","Data":"1a73a3643085843ce4b30ae289e534f3c604fb2a06371f5cf050271e2b498b2e"} Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.659699 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp"] Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.665105 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" event={"ID":"86d28008-71fe-4476-80ef-02d4086307b6","Type":"ContainerStarted","Data":"2660268a270f411d420e4be8ea5e77f07471c66111b60a3139adf6955e4bf854"} Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.666430 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d" event={"ID":"d232bc4e-f92d-4b11-bab8-f271f05ebba9","Type":"ContainerStarted","Data":"ace69ea4550b5e1d9e220a0793fd8c02ff0e87bd34bdfe9bb5f12b48e5d93b94"} Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.668419 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" event={"ID":"a80192f5-0bea-48df-a5b5-cae9402eb6fe","Type":"ContainerStarted","Data":"7078378a48f994f9a00ad63a28a3f79666f54bc7e4ab97858fbd2a4695583a1c"} Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.678209 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:38 crc kubenswrapper[5049]: E0127 16:59:38.679295 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:39.179278046 +0000 UTC m=+154.278251585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.681120 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c" event={"ID":"3ef5b7d0-2d2e-4229-8149-edeb57475be6","Type":"ContainerStarted","Data":"5744e17eb127b33a85a85588c44fe7b0fd7a712d47e5acaf5dde530bd9691183"} Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.724526 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-q5xl9" event={"ID":"1d175bca-3f73-4ad1-be29-f724a6baee2c","Type":"ContainerStarted","Data":"29a3113cbbf1b3f0a070860090f9962b662ee3e48a4a67d281a598b0d93cfa6e"} Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.724587 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-q5xl9" event={"ID":"1d175bca-3f73-4ad1-be29-f724a6baee2c","Type":"ContainerStarted","Data":"08c09dc783f5ae651359794c7b465b2a06d5fc43545a42f84fc64f0851d990fb"} Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.739045 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-qxz5r" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.768379 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.771618 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h8z99"] Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.780362 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:38 crc kubenswrapper[5049]: E0127 16:59:38.788315 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:39.288296626 +0000 UTC m=+154.387270175 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.810440 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s"] Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.858527 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gmn44"] Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.867709 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qnqlr"] Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.890326 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:38 crc kubenswrapper[5049]: E0127 16:59:38.890660 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:39.390631974 +0000 UTC m=+154.489605533 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.890986 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:38 crc kubenswrapper[5049]: E0127 16:59:38.891359 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:39.391349428 +0000 UTC m=+154.490322977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.932104 5049 patch_prober.go:28] interesting pod/router-default-5444994796-q5xl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 16:59:38 crc kubenswrapper[5049]: [-]has-synced failed: reason withheld Jan 27 16:59:38 crc kubenswrapper[5049]: [+]process-running ok Jan 27 16:59:38 crc kubenswrapper[5049]: healthz check failed Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.932177 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q5xl9" podUID="1d175bca-3f73-4ad1-be29-f724a6baee2c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 16:59:38 crc kubenswrapper[5049]: W0127 16:59:38.975908 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0092259f_7233_4db1_9ed5_667deb592e96.slice/crio-a02c8a80fe83f08150e5d7c9fb5370ebf8c36a71c3dafbaefeab250ef618ac6a WatchSource:0}: Error finding container a02c8a80fe83f08150e5d7c9fb5370ebf8c36a71c3dafbaefeab250ef618ac6a: Status 404 returned error can't find the container with id a02c8a80fe83f08150e5d7c9fb5370ebf8c36a71c3dafbaefeab250ef618ac6a Jan 27 16:59:38 crc kubenswrapper[5049]: I0127 16:59:38.993457 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:38 crc kubenswrapper[5049]: E0127 16:59:38.993865 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:39.493847624 +0000 UTC m=+154.592821173 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:39 crc kubenswrapper[5049]: W0127 16:59:39.004798 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod098c2b85_fe69_4df5_9ec3_43a6f25316c0.slice/crio-9eb778d2abe5d04c9bea5d3a4289421e55da614fc924933b45c50d9255d866ea WatchSource:0}: Error finding container 9eb778d2abe5d04c9bea5d3a4289421e55da614fc924933b45c50d9255d866ea: Status 404 returned error can't find the container with id 9eb778d2abe5d04c9bea5d3a4289421e55da614fc924933b45c50d9255d866ea Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.095359 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:39 crc kubenswrapper[5049]: E0127 16:59:39.096120 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:39.596103997 +0000 UTC m=+154.695077546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.116357 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt" podStartSLOduration=132.116336165 podStartE2EDuration="2m12.116336165s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:39.094342462 +0000 UTC m=+154.193316021" watchObservedRunningTime="2026-01-27 16:59:39.116336165 +0000 UTC m=+154.215309714" Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.118858 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-77q95" podStartSLOduration=132.118849327 podStartE2EDuration="2m12.118849327s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:39.114922837 +0000 UTC m=+154.213896386" watchObservedRunningTime="2026-01-27 16:59:39.118849327 +0000 UTC m=+154.217822876" Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.197792 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:39 crc kubenswrapper[5049]: E0127 16:59:39.198290 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:39.698242165 +0000 UTC m=+154.797215714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.198363 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:39 crc kubenswrapper[5049]: E0127 16:59:39.198721 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:39.698664105 +0000 UTC m=+154.797637654 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.299249 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:39 crc kubenswrapper[5049]: E0127 16:59:39.299416 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:39.799393145 +0000 UTC m=+154.898366694 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.300049 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:39 crc kubenswrapper[5049]: E0127 16:59:39.300353 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:39.800339811 +0000 UTC m=+154.899313360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.406659 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:39 crc kubenswrapper[5049]: E0127 16:59:39.407041 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:39.907024798 +0000 UTC m=+155.005998347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.423030 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nvqqq" podStartSLOduration=132.423004671 podStartE2EDuration="2m12.423004671s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:39.418751495 +0000 UTC m=+154.517725054" watchObservedRunningTime="2026-01-27 16:59:39.423004671 +0000 UTC m=+154.521978220" Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.508583 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:39 crc kubenswrapper[5049]: E0127 16:59:39.509026 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:40.009010319 +0000 UTC m=+155.107983868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.528568 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" podStartSLOduration=132.528542393 podStartE2EDuration="2m12.528542393s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:39.521815558 +0000 UTC m=+154.620789097" watchObservedRunningTime="2026-01-27 16:59:39.528542393 +0000 UTC m=+154.627515942" Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.610519 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:39 crc kubenswrapper[5049]: E0127 16:59:39.610855 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:40.110822491 +0000 UTC m=+155.209796040 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.611336 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:39 crc kubenswrapper[5049]: E0127 16:59:39.611784 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:40.111765116 +0000 UTC m=+155.210738665 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.717664 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:39 crc kubenswrapper[5049]: E0127 16:59:39.717815 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:40.217781632 +0000 UTC m=+155.316755181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.717887 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:39 crc kubenswrapper[5049]: E0127 16:59:39.719073 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:40.219063144 +0000 UTC m=+155.318036693 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.768979 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" event={"ID":"34cc919b-d826-444d-9748-e3e6704d03cb","Type":"ContainerStarted","Data":"c13e7138a966ade0aa8fc240e2ecfc52b5f466028c2430688872555a92d99d67"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.769038 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gmn44" event={"ID":"ba18b997-5143-40c5-9309-120e553e337a","Type":"ContainerStarted","Data":"d54a9a32c65ef4faada17746a59f6688e959f29eee915b4182c79f8578fa1366"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.773462 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s" event={"ID":"3d1d195b-c8b9-4e9e-a47b-b0187cdd6195","Type":"ContainerStarted","Data":"90a692465444c7ed38b156a29b8adcb7f08ea66b177a3a1a9eea0f9f8e5ad09b"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.775646 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c" event={"ID":"3ef5b7d0-2d2e-4229-8149-edeb57475be6","Type":"ContainerStarted","Data":"b281a703e0ab83300f931769ce36116878302d15bc68c8e3729bf336f0d738b6"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.778882 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h8z99" event={"ID":"0092259f-7233-4db1-9ed5-667deb592e96","Type":"ContainerStarted","Data":"e56acc139d669cbfab9673217e43d1daa5df68bab2067992cb30f935f636aa48"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.778912 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h8z99" event={"ID":"0092259f-7233-4db1-9ed5-667deb592e96","Type":"ContainerStarted","Data":"a02c8a80fe83f08150e5d7c9fb5370ebf8c36a71c3dafbaefeab250ef618ac6a"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.784965 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" event={"ID":"d4e2cc58-fcbb-4e4c-84ec-762d66c13313","Type":"ContainerStarted","Data":"b9a18a6da12f7c4bb5193b94fc12848ed3d83b8d72230cf8a98ea5478e5202f0"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.786666 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qnqlr" event={"ID":"ed96c1d9-55f9-48df-970b-2b1e71a90633","Type":"ContainerStarted","Data":"73784ccfff0950e8f15bad3bcf6f947969eb466dacab45b815dc342fa7a88e4d"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.794379 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" event={"ID":"86d28008-71fe-4476-80ef-02d4086307b6","Type":"ContainerStarted","Data":"fa20dbae190437228f54d684426fdf0be1028e25b72b18f349fd15775b1220bf"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.796259 5049 generic.go:334] "Generic (PLEG): container finished" podID="c83d0f19-d930-4144-9fbc-c581fe082422" containerID="695d9440e3ff2a66a257e2b487e11ed50cd4f5a468af17837ff5cbcd87233a2e" exitCode=0
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.796334 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp" event={"ID":"c83d0f19-d930-4144-9fbc-c581fe082422","Type":"ContainerDied","Data":"695d9440e3ff2a66a257e2b487e11ed50cd4f5a468af17837ff5cbcd87233a2e"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.796366 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp" event={"ID":"c83d0f19-d930-4144-9fbc-c581fe082422","Type":"ContainerStarted","Data":"68f149f3be0ff3e884b2d4d0377cb6d29ea9ca92fbbe4e12dd9d0e779027ff51"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.797590 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kjclh" event={"ID":"ae3ce8f3-8ec9-4430-bd93-8ae068f1af28","Type":"ContainerStarted","Data":"cc100613b1b579706b2366806b66302d1fa95d96314ff5c12e61d0bfd018aa39"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.797707 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kjclh" event={"ID":"ae3ce8f3-8ec9-4430-bd93-8ae068f1af28","Type":"ContainerStarted","Data":"2d9ed3c9255641f107569bb913b5dccd6c05747a9d11ea6b89b31d8c8090efba"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.798689 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-msgbv" event={"ID":"b96780a0-72a6-4ee0-ae94-60221d4f0a58","Type":"ContainerStarted","Data":"0396d4172a142a20aae6270f06fb234bb6514bc322eee3069863e660b2d5e39b"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.800729 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-msgbv"
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.800904 5049 patch_prober.go:28] interesting pod/downloads-7954f5f757-msgbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.800945 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-msgbv" podUID="b96780a0-72a6-4ee0-ae94-60221d4f0a58" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.802196 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" event={"ID":"098c2b85-fe69-4df5-9ec3-43a6f25316c0","Type":"ContainerStarted","Data":"9eb778d2abe5d04c9bea5d3a4289421e55da614fc924933b45c50d9255d866ea"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.803800 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" event={"ID":"7de8021c-2a6e-43d0-bd02-b297e5583c52","Type":"ContainerStarted","Data":"3ff634b810e6d99112ab9decfeabb059528de1216787b2ce4d827df1f804f797"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.808705 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d" event={"ID":"d232bc4e-f92d-4b11-bab8-f271f05ebba9","Type":"ContainerStarted","Data":"d6431f84d92876fa136cfa0298f3d5bf4af1d791de5e85056927d81d0fd711ca"}
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.819288 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:39 crc kubenswrapper[5049]: E0127 16:59:39.819429 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:40.319394864 +0000 UTC m=+155.418368413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.819585 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:39 crc kubenswrapper[5049]: E0127 16:59:39.820569 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:40.32055166 +0000 UTC m=+155.419525209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.877515 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-msgbv" podStartSLOduration=132.877496513 podStartE2EDuration="2m12.877496513s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:39.876872273 +0000 UTC m=+154.975845822" watchObservedRunningTime="2026-01-27 16:59:39.877496513 +0000 UTC m=+154.976470062"
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.910504 5049 patch_prober.go:28] interesting pod/router-default-5444994796-q5xl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 16:59:39 crc kubenswrapper[5049]: [-]has-synced failed: reason withheld
Jan 27 16:59:39 crc kubenswrapper[5049]: [+]process-running ok
Jan 27 16:59:39 crc kubenswrapper[5049]: healthz check failed
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.910578 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q5xl9" podUID="1d175bca-3f73-4ad1-be29-f724a6baee2c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.924749 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vl66d" podStartSLOduration=132.924721916 podStartE2EDuration="2m12.924721916s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:39.923287557 +0000 UTC m=+155.022261106" watchObservedRunningTime="2026-01-27 16:59:39.924721916 +0000 UTC m=+155.023695465"
Jan 27 16:59:39 crc kubenswrapper[5049]: I0127 16:59:39.927483 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:39 crc kubenswrapper[5049]: E0127 16:59:39.929436 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:40.429399892 +0000 UTC m=+155.528373441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
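The mount and unmount failures above all share one cause: the kubelet only builds a CSI client for a driver that has announced itself over the node's plugin-registration mechanism, and kubevirt.io.hostpath-provisioner has not registered yet (its csi-hostpathplugin-4gw2c pod is itself still starting a few entries up). A minimal Go sketch of that lookup pattern follows; the names here are illustrative stand-ins, not kubelet's real types.

package main

import (
	"fmt"
	"sync"
)

// driverRegistry is a hypothetical stand-in for the kubelet-side list of
// CSI drivers that have completed node registration.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]struct{}
}

// register records a driver name, as the node-driver-registrar sidecar
// would after the plugin exposes its socket.
func (r *driverRegistry) register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = struct{}{}
}

// newClient fails in exactly the shape the log reports while the plugin
// has not registered yet.
func (r *driverRegistry) newClient(name string) error {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if _, ok := r.drivers[name]; !ok {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return nil // a real implementation would return a usable client here
}

func main() {
	r := &driverRegistry{drivers: map[string]struct{}{}}
	fmt.Println(r.newClient("kubevirt.io.hostpath-provisioner")) // fails: not registered yet
	r.register("kubevirt.io.hostpath-provisioner")               // registrar announces the driver
	fmt.Println(r.newClient("kubevirt.io.hostpath-provisioner")) // <nil>: mounts can proceed
}

Once the plugin finishes starting and registers, the same MountVolume operation succeeds on a later retry; nothing in these entries needs operator intervention.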
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.007584 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-c9dvb" podStartSLOduration=133.007373072 podStartE2EDuration="2m13.007373072s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:39.964767502 +0000 UTC m=+155.063741061" watchObservedRunningTime="2026-01-27 16:59:40.007373072 +0000 UTC m=+155.106346641"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.010036 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-9vk5m"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.020926 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.031656 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:40 crc kubenswrapper[5049]: E0127 16:59:40.032144 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:40.532125528 +0000 UTC m=+155.631099077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.077870 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-q5xl9" podStartSLOduration=133.077833818 podStartE2EDuration="2m13.077833818s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:40.077340764 +0000 UTC m=+155.176314313" watchObservedRunningTime="2026-01-27 16:59:40.077833818 +0000 UTC m=+155.176807367"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.085065 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-6h54c" podStartSLOduration=133.085020445 podStartE2EDuration="2m13.085020445s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:40.050810732 +0000 UTC m=+155.149784281" watchObservedRunningTime="2026-01-27 16:59:40.085020445 +0000 UTC m=+155.183993994"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.120999 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pbgwv" podStartSLOduration=134.120966623 podStartE2EDuration="2m14.120966623s" podCreationTimestamp="2026-01-27 16:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:40.117088996 +0000 UTC m=+155.216062545" watchObservedRunningTime="2026-01-27 16:59:40.120966623 +0000 UTC m=+155.219940172"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.133354 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:40 crc kubenswrapper[5049]: E0127 16:59:40.133738 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:40.63372156 +0000 UTC m=+155.732695099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.157732 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h8z99" podStartSLOduration=133.15771452 podStartE2EDuration="2m13.15771452s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:40.156808096 +0000 UTC m=+155.255781645" watchObservedRunningTime="2026-01-27 16:59:40.15771452 +0000 UTC m=+155.256688069"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.236026 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:40 crc kubenswrapper[5049]: E0127 16:59:40.236971 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:40.73695145 +0000 UTC m=+155.835924999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.270708 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kjclh" podStartSLOduration=5.27066534 podStartE2EDuration="5.27066534s" podCreationTimestamp="2026-01-27 16:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:40.262453703 +0000 UTC m=+155.361427252" watchObservedRunningTime="2026-01-27 16:59:40.27066534 +0000 UTC m=+155.369638889"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.282288 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.324401 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-59dkc" podStartSLOduration=133.324382867 podStartE2EDuration="2m13.324382867s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:40.321792502 +0000 UTC m=+155.420766061" watchObservedRunningTime="2026-01-27 16:59:40.324382867 +0000 UTC m=+155.423356416"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.341831 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:40 crc kubenswrapper[5049]: E0127 16:59:40.342224 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:40.842208689 +0000 UTC m=+155.941182238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.443872 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:40 crc kubenswrapper[5049]: E0127 16:59:40.444306 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:40.944288434 +0000 UTC m=+156.043261983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.544871 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:40 crc kubenswrapper[5049]: E0127 16:59:40.545459 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:41.045438604 +0000 UTC m=+156.144412153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.546758 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.586508 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-b444j"]
Jan 27 16:59:40 crc kubenswrapper[5049]: W0127 16:59:40.586638 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33c5f582_79d8_4ba1_8806_1104540ed6eb.slice/crio-311f631c785fd09216d7f0a32b93905da4bf1e2bf89a7efae01eb5571ae23294 WatchSource:0}: Error finding container 311f631c785fd09216d7f0a32b93905da4bf1e2bf89a7efae01eb5571ae23294: Status 404 returned error can't find the container with id 311f631c785fd09216d7f0a32b93905da4bf1e2bf89a7efae01eb5571ae23294
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.594568 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c5pfl"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.601807 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689"]
Jan 27 16:59:40 crc kubenswrapper[5049]: W0127 16:59:40.604553 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod770cf38a_9f1f_441a_bf23_4944bd750e24.slice/crio-64fbd6802f32d098453d3581b6ce3ce853d4acb32ecb55cc13b57988725613d0 WatchSource:0}: Error finding container 64fbd6802f32d098453d3581b6ce3ce853d4acb32ecb55cc13b57988725613d0: Status 404 returned error can't find the container with id 64fbd6802f32d098453d3581b6ce3ce853d4acb32ecb55cc13b57988725613d0
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.633271 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.634639 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.646750 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:40 crc kubenswrapper[5049]: E0127 16:59:40.647231 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:41.147213714 +0000 UTC m=+156.246187263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.659619 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.692910 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-v9tsv"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.714603 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q7vfz"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.717458 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.723807 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-j7hkh"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.723920 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.740998 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.747867 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:40 crc kubenswrapper[5049]: E0127 16:59:40.749612 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:41.249584973 +0000 UTC m=+156.348558522 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.751117 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.762159 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pjs7m"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.767197 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-76ltz"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.771947 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq"]
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.773719 5049 patch_prober.go:28] interesting pod/router-default-5444994796-q5xl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 16:59:40 crc kubenswrapper[5049]: [-]has-synced failed: reason withheld
Jan 27 16:59:40 crc kubenswrapper[5049]: [+]process-running ok
Jan 27 16:59:40 crc kubenswrapper[5049]: healthz check failed
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.773796 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q5xl9" podUID="1d175bca-3f73-4ad1-be29-f724a6baee2c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.775425 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89"]
Jan 27 16:59:40 crc kubenswrapper[5049]: W0127 16:59:40.802558 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd894ce44_2759_40c6_9d2c_f26fa1691f0d.slice/crio-73cbad8ddad28188ac043eb10a51ac0626c1457ccfd7d781ddd392bddfb4c070 WatchSource:0}: Error finding container 73cbad8ddad28188ac043eb10a51ac0626c1457ccfd7d781ddd392bddfb4c070: Status 404 returned error can't find the container with id 73cbad8ddad28188ac043eb10a51ac0626c1457ccfd7d781ddd392bddfb4c070
Jan 27 16:59:40 crc kubenswrapper[5049]: W0127 16:59:40.804348 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d508920_c710_4060_b99f_12594f7c1fb4.slice/crio-8599f3ad00eb1551117f548190c6779d77ec88c994af9c03cdc9b1e13cb59e0f WatchSource:0}: Error finding container 8599f3ad00eb1551117f548190c6779d77ec88c994af9c03cdc9b1e13cb59e0f: Status 404 returned error can't find the container with id 8599f3ad00eb1551117f548190c6779d77ec88c994af9c03cdc9b1e13cb59e0f
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.834938 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" event={"ID":"5d508920-c710-4060-b99f-12594f7c1fb4","Type":"ContainerStarted","Data":"8599f3ad00eb1551117f548190c6779d77ec88c994af9c03cdc9b1e13cb59e0f"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.845936 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gmn44" event={"ID":"ba18b997-5143-40c5-9309-120e553e337a","Type":"ContainerStarted","Data":"df796957638e051b5596d97a855f562e0334ddacc8d661d4417b633f02ea8d22"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.849949 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:40 crc kubenswrapper[5049]: E0127 16:59:40.850540 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:41.350523093 +0000 UTC m=+156.449496642 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:40 crc kubenswrapper[5049]: W0127 16:59:40.853464 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb22f850b_fb2b_4839_8b41_bb2a92059a5c.slice/crio-698a6b01cb881e9e0bf61067c49262464db8a4219cc8aca853cf1c51d69ff9f3 WatchSource:0}: Error finding container 698a6b01cb881e9e0bf61067c49262464db8a4219cc8aca853cf1c51d69ff9f3: Status 404 returned error can't find the container with id 698a6b01cb881e9e0bf61067c49262464db8a4219cc8aca853cf1c51d69ff9f3
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.863982 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qnqlr" event={"ID":"ed96c1d9-55f9-48df-970b-2b1e71a90633","Type":"ContainerStarted","Data":"01e0060b09da86f1c4b6942660c0d6e52297e911f35606a30d47e1198b89dda2"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.886388 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" event={"ID":"7aeb400e-8352-4de5-baf4-e64073f57d32","Type":"ContainerStarted","Data":"785601291bc736b26c7e78d0048cd50a5f76aa0b32b796e922d68d08ae936bd5"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.897693 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-qnqlr" podStartSLOduration=133.897662772 podStartE2EDuration="2m13.897662772s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:40.896641203 +0000 UTC m=+155.995614742" watchObservedRunningTime="2026-01-27 16:59:40.897662772 +0000 UTC m=+155.996636321"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.908984 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" event={"ID":"7de8021c-2a6e-43d0-bd02-b297e5583c52","Type":"ContainerStarted","Data":"d0b23b315b5826eec5df67dde277529ffe1b663f2760d00e1116bf5c4bec22eb"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.914276 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" event={"ID":"86d28008-71fe-4476-80ef-02d4086307b6","Type":"ContainerStarted","Data":"1e2baa34817f146bc3f2d6c06712950958e194670eb5795db4f354aecf433796"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.927974 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp" event={"ID":"c83d0f19-d930-4144-9fbc-c581fe082422","Type":"ContainerStarted","Data":"c4ea8b108a3cf7a808fb92222a6a148db53f65731c74df2d5fcd81473eb27c5b"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.928477 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.935470 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-sx4ts" podStartSLOduration=133.935445958 podStartE2EDuration="2m13.935445958s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:40.933367778 +0000 UTC m=+156.032341327" watchObservedRunningTime="2026-01-27 16:59:40.935445958 +0000 UTC m=+156.034419507"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.955329 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:40 crc kubenswrapper[5049]: E0127 16:59:40.956391 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:41.45637004 +0000 UTC m=+156.555343589 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
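Note the cadence of the volume errors: each failed operation is parked rather than retried inline, and nestedpendingoperations refuses further attempts until the printed deadline, 500ms after the failure (the durationBeforeRetry shown in every entry). The self-contained Go sketch below mirrors that park-and-retry shape; it assumes a fixed 500ms delay for simplicity and is not kubelet's actual nestedpendingoperations code.

package main

import (
	"errors"
	"fmt"
	"sync/atomic"
	"time"
)

// retryWithDelay parks a failed operation for `delay` before allowing the
// next attempt, mirroring the "No retries permitted until ..." entries.
func retryWithDelay(op func() error, delay time.Duration, maxAttempts int) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; no retries permitted for %s\n", attempt, err, delay)
		time.Sleep(delay)
	}
	return err
}

func main() {
	var registered atomic.Bool
	// Simulate the CSI driver finishing registration ~1.2s after the
	// first mount attempt, as the csi-hostpathplugin pod comes up.
	go func() {
		time.Sleep(1200 * time.Millisecond)
		registered.Store(true)
	}()
	err := retryWithDelay(func() error {
		if !registered.Load() {
			return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
		}
		return nil
	}, 500*time.Millisecond, 10)
	fmt.Println("final result:", err)
}

Parking the operation this way keeps a slow plugin from monopolizing the reconciler loop while other volumes and pods (visible in the interleaved PLEG and probe entries) continue to make progress.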
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.961040 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" event={"ID":"c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46","Type":"ContainerStarted","Data":"414d5052d8b738508e7c7900b232c1204f517ee0aab6b05c25a408b33b7b1e4b"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.961139 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" event={"ID":"c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46","Type":"ContainerStarted","Data":"85801791efc0afbdefbae7e082d1cfef55e098214f7ad717c35eae8b2dcbb31f"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.965060 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s" event={"ID":"3d1d195b-c8b9-4e9e-a47b-b0187cdd6195","Type":"ContainerStarted","Data":"f54c100733f3d6e83f83c83049ec7eb8c9233447f6e645a6677f525100991aac"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.965124 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s" event={"ID":"3d1d195b-c8b9-4e9e-a47b-b0187cdd6195","Type":"ContainerStarted","Data":"65f0d8fcd99b5b4da4409427a44fef7a9aa0b96657c7eb7dc84133b7be9cdf93"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.966239 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b444j" event={"ID":"770cf38a-9f1f-441a-bf23-4944bd750e24","Type":"ContainerStarted","Data":"64fbd6802f32d098453d3581b6ce3ce853d4acb32ecb55cc13b57988725613d0"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.969373 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v" event={"ID":"02b9d72e-e939-4cd7-9c2d-17fae6c25c4c","Type":"ContainerStarted","Data":"445f5790c0850bb3324c43b542a92005722f2d26195c24b57cb853da6366015f"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.969433 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v" event={"ID":"02b9d72e-e939-4cd7-9c2d-17fae6c25c4c","Type":"ContainerStarted","Data":"1404122fe1e0907211060905e7c1758ac88f272be99f483582b0aa56701a3fc7"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.979504 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp" podStartSLOduration=133.979477707 podStartE2EDuration="2m13.979477707s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:40.974652034 +0000 UTC m=+156.073625583" watchObservedRunningTime="2026-01-27 16:59:40.979477707 +0000 UTC m=+156.078451256"
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.987487 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vk5m" event={"ID":"ef242174-8534-44ce-bc43-cb6648c594c4","Type":"ContainerStarted","Data":"080b73c323dc69548f8ba76f1d70c1bb8ef42796fc79b77cfea627406711d673"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.987530 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vk5m" event={"ID":"ef242174-8534-44ce-bc43-cb6648c594c4","Type":"ContainerStarted","Data":"635fac9cc45a51cc69f13e530c83a0da4faab1413dce5dbe98b7d828073d791a"}
Jan 27 16:59:40 crc kubenswrapper[5049]: I0127 16:59:40.987543 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vk5m" event={"ID":"ef242174-8534-44ce-bc43-cb6648c594c4","Type":"ContainerStarted","Data":"42f5272abd3abdab5d3a22206b7eaf2fb68c68e7edddcb87529e04dc10975f12"}
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:40.999828 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j" event={"ID":"d894ce44-2759-40c6-9d2c-f26fa1691f0d","Type":"ContainerStarted","Data":"73cbad8ddad28188ac043eb10a51ac0626c1457ccfd7d781ddd392bddfb4c070"}
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.018615 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" event={"ID":"34cc919b-d826-444d-9748-e3e6704d03cb","Type":"ContainerStarted","Data":"6579ccb512cb6ac5528f9dd3b00a7260e14034c3f8a07fd9b894327e4227182b"}
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.018664 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" event={"ID":"34cc919b-d826-444d-9748-e3e6704d03cb","Type":"ContainerStarted","Data":"e1038061a783dda4277c6a6616aa2b21a028a1e74fa7031da9b5348a5ad93a72"}
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.027294 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-frn4s" podStartSLOduration=134.027278618 podStartE2EDuration="2m14.027278618s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:40.991819364 +0000 UTC m=+156.090792903" watchObservedRunningTime="2026-01-27 16:59:41.027278618 +0000 UTC m=+156.126252167"
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.028739 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cm47v" podStartSLOduration=135.028730098 podStartE2EDuration="2m15.028730098s" podCreationTimestamp="2026-01-27 16:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:41.025827748 +0000 UTC m=+156.124801317" watchObservedRunningTime="2026-01-27 16:59:41.028730098 +0000 UTC m=+156.127703647"
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.037706 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" event={"ID":"9987ba69-abc5-4d37-84aa-a708e31c1586","Type":"ContainerStarted","Data":"f5160eed707cca0124923c8bda21e08140828f7614efec6abb76cdcba64b5229"}
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.049928 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-j7hkh" event={"ID":"3a79954c-d75b-4e08-b5f3-ffb5783d8ac7","Type":"ContainerStarted","Data":"85e448af839b58b5edce29d248955dac57d08c00ab1f9c2f7f238a146be614eb"}
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.056530 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf" event={"ID":"33c5f582-79d8-4ba1-8806-1104540ed6eb","Type":"ContainerStarted","Data":"311f631c785fd09216d7f0a32b93905da4bf1e2bf89a7efae01eb5571ae23294"}
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.058768 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:41 crc kubenswrapper[5049]: E0127 16:59:41.060942 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:41.560915994 +0000 UTC m=+156.659889713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.069720 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" podStartSLOduration=134.069701499 podStartE2EDuration="2m14.069701499s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:41.048341516 +0000 UTC m=+156.147315065" watchObservedRunningTime="2026-01-27 16:59:41.069701499 +0000 UTC m=+156.168675058"
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.071224 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-9vk5m" podStartSLOduration=134.071219092 podStartE2EDuration="2m14.071219092s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:41.070717508 +0000 UTC m=+156.169691067" watchObservedRunningTime="2026-01-27 16:59:41.071219092 +0000 UTC m=+156.170192641"
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.083173 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-76ltz" event={"ID":"a1209cc6-7d3a-4431-80ed-878ad81fbd3d","Type":"ContainerStarted","Data":"bda78f2c142f736e121b17d80f708231e68c4ec5911266bc8639fb724d4c161f"}
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.096308 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" event={"ID":"80c97914-9a11-474e-a4f1-14dde70837cd","Type":"ContainerStarted","Data":"66981a2312ffbd5c3517f3ec2ce182aace3912ebe1451eb6b41f74d99cef4abc"}
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.098219 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9sm29" podStartSLOduration=135.098188596 podStartE2EDuration="2m15.098188596s" podCreationTimestamp="2026-01-27 16:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:41.097243331 +0000 UTC m=+156.196216880" watchObservedRunningTime="2026-01-27 16:59:41.098188596 +0000 UTC m=+156.197162145"
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.100705 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" event={"ID":"098c2b85-fe69-4df5-9ec3-43a6f25316c0","Type":"ContainerStarted","Data":"37ce4de3770e3ee89b2ab00a60585ec4f6f2a1a86bee31f7c522e25a8da0c136"}
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.106568 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" event={"ID":"7e762e88-c00a-49b7-8a84-48c7fe50b602","Type":"ContainerStarted","Data":"a9981277bfbec9aa20881abd6ef5a268987f673773727be3ea777e0cd7e5306b"}
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.109440 5049 patch_prober.go:28] interesting pod/downloads-7954f5f757-msgbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body=
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.109507 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-msgbv" podUID="b96780a0-72a6-4ee0-ae94-60221d4f0a58" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused"
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.114789 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w44rt"
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.127119 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf" podStartSLOduration=134.127100004 podStartE2EDuration="2m14.127100004s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:41.126758928 +0000 UTC m=+156.225732477" watchObservedRunningTime="2026-01-27 16:59:41.127100004 +0000 UTC m=+156.226073553"
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.159660 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:41 crc kubenswrapper[5049]: E0127 16:59:41.161124 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:41.661100168 +0000 UTC m=+156.760073717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.197918 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-cpnxt" podStartSLOduration=134.197903727 podStartE2EDuration="2m14.197903727s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:41.157443831 +0000 UTC m=+156.256417380" watchObservedRunningTime="2026-01-27 16:59:41.197903727 +0000 UTC m=+156.296877276"
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.261800 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:41 crc kubenswrapper[5049]: E0127 16:59:41.262312 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:41.76228758 +0000 UTC m=+156.861261129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.362533 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:41 crc kubenswrapper[5049]: E0127 16:59:41.362948 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:41.862932915 +0000 UTC m=+156.961906464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.463949 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:41 crc kubenswrapper[5049]: E0127 16:59:41.464922 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:41.964904485 +0000 UTC m=+157.063878034 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.565948 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:41 crc kubenswrapper[5049]: E0127 16:59:41.566127 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:42.066098026 +0000 UTC m=+157.165071575 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.566228 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:41 crc kubenswrapper[5049]: E0127 16:59:41.566591 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:42.06658416 +0000 UTC m=+157.165557699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.669008 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:41 crc kubenswrapper[5049]: E0127 16:59:41.670101 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:42.170052392 +0000 UTC m=+157.269025941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.769860 5049 patch_prober.go:28] interesting pod/router-default-5444994796-q5xl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 16:59:41 crc kubenswrapper[5049]: [-]has-synced failed: reason withheld Jan 27 16:59:41 crc kubenswrapper[5049]: [+]process-running ok Jan 27 16:59:41 crc kubenswrapper[5049]: healthz check failed Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.770322 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q5xl9" podUID="1d175bca-3f73-4ad1-be29-f724a6baee2c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.772251 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:41 crc kubenswrapper[5049]: E0127 16:59:41.772760 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:42.272662332 +0000 UTC m=+157.371635871 (durationBeforeRetry 500ms). 
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.873150 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:41 crc kubenswrapper[5049]: E0127 16:59:41.873453 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:42.373422614 +0000 UTC m=+157.472396153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.873507 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:41 crc kubenswrapper[5049]: E0127 16:59:41.874015 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:42.374004602 +0000 UTC m=+157.472978151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.975085 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:41 crc kubenswrapper[5049]: E0127 16:59:41.975313 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:42.475279088 +0000 UTC m=+157.574252647 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:41 crc kubenswrapper[5049]: I0127 16:59:41.975398 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:41 crc kubenswrapper[5049]: E0127 16:59:41.976054 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:42.476032364 +0000 UTC m=+157.575006103 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.077319 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:42 crc kubenswrapper[5049]: E0127 16:59:42.078000 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:42.577984193 +0000 UTC m=+157.676957742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.133442 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gmn44" event={"ID":"ba18b997-5143-40c5-9309-120e553e337a","Type":"ContainerStarted","Data":"af69e0386db572698564cfd71cca13c8c9f78c7000932ff8f91b309b3cb2bd09"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.139035 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq" event={"ID":"ed69d5ac-f3d2-42f7-a923-265ad3aad708","Type":"ContainerStarted","Data":"017323f22d807ae5ecd3a81f4e9dc306dbd2597a113b131de7a23c4348bc65ed"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.139103 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq" event={"ID":"ed69d5ac-f3d2-42f7-a923-265ad3aad708","Type":"ContainerStarted","Data":"fcb3fe5ffae0a435f4d6ed61bb0f089f244d7bf72548262ba372a891b414a28f"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.145001 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pjs7m" event={"ID":"b22f850b-fb2b-4839-8b41-bb2a92059a5c","Type":"ContainerStarted","Data":"27496b8dbfe31c62b93a0dfc6217d77e1f275b71b86492383e9d1775a661dbf0"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.145035 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pjs7m" event={"ID":"b22f850b-fb2b-4839-8b41-bb2a92059a5c","Type":"ContainerStarted","Data":"698a6b01cb881e9e0bf61067c49262464db8a4219cc8aca853cf1c51d69ff9f3"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.151859 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-j7hkh" event={"ID":"3a79954c-d75b-4e08-b5f3-ffb5783d8ac7","Type":"ContainerStarted","Data":"13ad72faea54f0ecdc8f77fd6865e1b12c25ad50890a63fd37ac53a83fcc428e"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.157731 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-gmn44" podStartSLOduration=135.157710187 podStartE2EDuration="2m15.157710187s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:42.155444498 +0000 UTC m=+157.254418047" watchObservedRunningTime="2026-01-27 16:59:42.157710187 +0000 UTC m=+157.256683736"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.166646 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" event={"ID":"80c97914-9a11-474e-a4f1-14dde70837cd","Type":"ContainerStarted","Data":"77f2a27f9c3018f86d74936acf8df8d829f97f4410ff675fb795c4f07921814b"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.167097 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.171630 5049 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hj689 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.171691 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" podUID="80c97914-9a11-474e-a4f1-14dde70837cd" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.179845 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:42 crc kubenswrapper[5049]: E0127 16:59:42.182625 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:42.682612671 +0000 UTC m=+157.781586220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.193924 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-j7hkh" podStartSLOduration=135.193881166 podStartE2EDuration="2m15.193881166s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:42.189701503 +0000 UTC m=+157.288675053" watchObservedRunningTime="2026-01-27 16:59:42.193881166 +0000 UTC m=+157.292854705"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.196092 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp" event={"ID":"cd191306-613f-4c2f-9b3e-e38146dd4400","Type":"ContainerStarted","Data":"f7ff59b1e089235f71d1e9650fb391bb072f068f30a531672e9f8650f1b771ba"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.196140 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp" event={"ID":"cd191306-613f-4c2f-9b3e-e38146dd4400","Type":"ContainerStarted","Data":"5664a3fbbe0e3a4f10c3b8a362f3f3a7023fcb82affec16bfe590273001ce1d9"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.196152 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp" event={"ID":"cd191306-613f-4c2f-9b3e-e38146dd4400","Type":"ContainerStarted","Data":"6a5f3ac70aa15ff3598e3043f1d6b02e7944fe32e55ea01e5d687d2acabfdf1e"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.196841 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.212661 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8p7nq" podStartSLOduration=135.212642013 podStartE2EDuration="2m15.212642013s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:42.211332779 +0000 UTC m=+157.310306328" watchObservedRunningTime="2026-01-27 16:59:42.212642013 +0000 UTC m=+157.311615562"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.219435 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2xkjk" event={"ID":"c7bd4bb4-bdf5-4d0f-956d-11f6516f2e46","Type":"ContainerStarted","Data":"d08d2421d22bd37287df8e91989afabca9d1c175b876b7d32ddbc6778ba55e07"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.239290 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j" event={"ID":"d894ce44-2759-40c6-9d2c-f26fa1691f0d","Type":"ContainerStarted","Data":"1f7b3a50bcd557041d32e6889b23638d95cc924b442c9ca6d9233f11f9215956"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.240181 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp" podStartSLOduration=135.240158073 podStartE2EDuration="2m15.240158073s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:42.23802663 +0000 UTC m=+157.337000179" watchObservedRunningTime="2026-01-27 16:59:42.240158073 +0000 UTC m=+157.339131612"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.250887 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" event={"ID":"7aeb400e-8352-4de5-baf4-e64073f57d32","Type":"ContainerStarted","Data":"9d7a8a58384555f6e1888850d427ed817d9c32bbafb01f166a0bb5570c8c5d0a"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.257372 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" event={"ID":"9987ba69-abc5-4d37-84aa-a708e31c1586","Type":"ContainerStarted","Data":"571e378e31fbfc15ab0823f4f9bfb1ca2211a979612090f3a659066581051520"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.258026 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.261392 5049 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-c5pfl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.261453 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" podUID="9987ba69-abc5-4d37-84aa-a708e31c1586" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.270331 5049 csr.go:261] certificate signing request csr-8mkvb is approved, waiting to be issued
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.271659 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689" podStartSLOduration=135.271626644 podStartE2EDuration="2m15.271626644s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:42.269983684 +0000 UTC m=+157.368957233" watchObservedRunningTime="2026-01-27 16:59:42.271626644 +0000 UTC m=+157.370600193"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.275023 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" event={"ID":"5c8277b4-0db2-4e62-aee2-4009a3afda61","Type":"ContainerStarted","Data":"1462380c5d6fa531599c8e0efe76e48f7b035abea2b62be599b36cd99d5a1cee"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.275067 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" event={"ID":"5c8277b4-0db2-4e62-aee2-4009a3afda61","Type":"ContainerStarted","Data":"84c05d918a17b3c81b1437a84a252817ac8ba0a97beb2a6f3cd4b00bc5241b42"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.276121 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.278319 5049 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-sx7x4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.278369 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" podUID="5c8277b4-0db2-4e62-aee2-4009a3afda61" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.283355 5049 csr.go:257] certificate signing request csr-8mkvb is issued
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.283488 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:42 crc kubenswrapper[5049]: E0127 16:59:42.285113 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:42.785096465 +0000 UTC m=+157.884070014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
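The csr.go entries above show the two-step life of a kubelet certificate signing request: csr-8mkvb is first approved (a condition on the object) and, moments later, issued (the signed certificate appears in its status). A small client-go sketch of waiting for that transition, assuming a placeholder kubeconfig path; illustrative only, not the kubelet's own csr.go:

```go
// Sketch: poll a CertificateSigningRequest until the signer populates
// status.certificate, mirroring the "approved, waiting to be issued" ->
// "is issued" transition for csr-8mkvb above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	for {
		csr, err := cs.CertificatesV1().CertificateSigningRequests().
			Get(context.TODO(), "csr-8mkvb", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Approval is recorded as a condition; issuance is the signed
		// certificate actually landing in status.certificate.
		if len(csr.Status.Certificate) > 0 {
			fmt.Printf("issued: %d bytes of PEM\n", len(csr.Status.Certificate))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```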
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.294835 5049 generic.go:334] "Generic (PLEG): container finished" podID="770cf38a-9f1f-441a-bf23-4944bd750e24" containerID="dec136c90a5bfa0591f02e6f09fdc1e871531b2222c665bb39407bcf80f5fe2f" exitCode=0
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.294993 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b444j" event={"ID":"770cf38a-9f1f-441a-bf23-4944bd750e24","Type":"ContainerDied","Data":"dec136c90a5bfa0591f02e6f09fdc1e871531b2222c665bb39407bcf80f5fe2f"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.296307 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" podStartSLOduration=135.296276686 podStartE2EDuration="2m15.296276686s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:42.295099019 +0000 UTC m=+157.394072568" watchObservedRunningTime="2026-01-27 16:59:42.296276686 +0000 UTC m=+157.395250235"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.320239 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-kc8qf" event={"ID":"33c5f582-79d8-4ba1-8806-1104540ed6eb","Type":"ContainerStarted","Data":"b8efdff36f05f6116481263a2c5529c1ee1987f621821dc1fc529e7bbf2de627"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.329380 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jdb9j" podStartSLOduration=135.329361975 podStartE2EDuration="2m15.329361975s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:42.328351016 +0000 UTC m=+157.427324575" watchObservedRunningTime="2026-01-27 16:59:42.329361975 +0000 UTC m=+157.428335524"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.329648 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-76ltz" event={"ID":"a1209cc6-7d3a-4431-80ed-878ad81fbd3d","Type":"ContainerStarted","Data":"015e6f99a112a86a250c6f7c0f0465177815fc995d440abfdf66b7fa0b920407"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.345495 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" event={"ID":"7e762e88-c00a-49b7-8a84-48c7fe50b602","Type":"ContainerStarted","Data":"ea11c626d2afb5b33c9cdfe25b3939d0efb232227a32ef63c4a304b26ec1ab2b"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.346149 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.348011 5049 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-q7vfz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.348052 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" podUID="7e762e88-c00a-49b7-8a84-48c7fe50b602" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.368865 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" event={"ID":"5d508920-c710-4060-b99f-12594f7c1fb4","Type":"ContainerStarted","Data":"02d14529b2b6407991312ac322a6eb1e4da790e5a46f5bc9edee5a042b52623f"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.369846 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.381245 5049 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s2n89 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.381310 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" podUID="5d508920-c710-4060-b99f-12594f7c1fb4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.387817 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.394704 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-76ltz" podStartSLOduration=7.394689633 podStartE2EDuration="7.394689633s" podCreationTimestamp="2026-01-27 16:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:42.394360147 +0000 UTC m=+157.493333696" watchObservedRunningTime="2026-01-27 16:59:42.394689633 +0000 UTC m=+157.493663182"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.400884 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-v9tsv" event={"ID":"ed398f74-73d3-4e3b-a7b8-57d283e9adfa","Type":"ContainerStarted","Data":"0163145f3be92ac46947ed8aff415ef96166debbc8d59a923870b166d4a90c58"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.400989 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-v9tsv" event={"ID":"ed398f74-73d3-4e3b-a7b8-57d283e9adfa","Type":"ContainerStarted","Data":"6b2546ded458e05e657ee56981efa1f7b63bef80a0a1f9b02549384b3faa51d9"}
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.403292 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" podStartSLOduration=136.403273968 podStartE2EDuration="2m16.403273968s" podCreationTimestamp="2026-01-27 16:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:42.365140135 +0000 UTC m=+157.464113684" watchObservedRunningTime="2026-01-27 16:59:42.403273968 +0000 UTC m=+157.502247517"
Jan 27 16:59:42 crc kubenswrapper[5049]: E0127 16:59:42.406172 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:42.906147487 +0000 UTC m=+158.005121216 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.452267 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4" podStartSLOduration=135.452252706 podStartE2EDuration="2m15.452252706s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:42.451877198 +0000 UTC m=+157.550850747" watchObservedRunningTime="2026-01-27 16:59:42.452252706 +0000 UTC m=+157.551226255"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.486693 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" podStartSLOduration=135.486661629 podStartE2EDuration="2m15.486661629s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:42.480106963 +0000 UTC m=+157.579080512" watchObservedRunningTime="2026-01-27 16:59:42.486661629 +0000 UTC m=+157.585635179"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.489230 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:42 crc kubenswrapper[5049]: E0127 16:59:42.491560 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:42.991542005 +0000 UTC m=+158.090515544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.521503 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89" podStartSLOduration=135.521487703 podStartE2EDuration="2m15.521487703s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:42.519596892 +0000 UTC m=+157.618570441" watchObservedRunningTime="2026-01-27 16:59:42.521487703 +0000 UTC m=+157.620461252"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.551550 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-v9tsv" podStartSLOduration=135.551536856 podStartE2EDuration="2m15.551536856s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:42.549117329 +0000 UTC m=+157.648090878" watchObservedRunningTime="2026-01-27 16:59:42.551536856 +0000 UTC m=+157.650510405"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.592492 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:42 crc kubenswrapper[5049]: E0127 16:59:42.592816 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:43.092803291 +0000 UTC m=+158.191776840 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
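The pod_startup_latency_tracker entries above report two numbers per pod: podStartE2EDuration (observed running time minus creation time) and podStartSLOduration, which additionally excludes the image-pull window. With the zero "0001-01-01" pull timestamps seen here, nothing is excluded and the two match. A small sketch of that arithmetic, using values from the packageserver entry above; illustrative, not kubelet code:

```go
// Sketch of the arithmetic behind podStartSLOduration: end-to-end startup
// minus any image-pull window. Zero pull timestamps (as in these entries)
// mean SLO and E2E durations coincide.
package main

import (
	"fmt"
	"time"
)

func startupDurations(created, firstPull, lastPull, running time.Time) (slo, e2e time.Duration) {
	e2e = running.Sub(created)
	slo = e2e
	if !firstPull.IsZero() && !lastPull.IsZero() {
		slo -= lastPull.Sub(firstPull) // exclude time spent pulling images
	}
	return slo, e2e
}

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-27T16:57:27Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-27T16:59:42.521487703Z")
	slo, e2e := startupDurations(created, time.Time{}, time.Time{}, running)
	// Prints 2m15.521487703s for both, matching the packageserver entry.
	fmt.Printf("podStartSLOduration=%v podStartE2EDuration=%v\n", slo, e2e)
}
```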
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.699704 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:42 crc kubenswrapper[5049]: E0127 16:59:42.701428 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:43.201404211 +0000 UTC m=+158.300377760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.772269 5049 patch_prober.go:28] interesting pod/router-default-5444994796-q5xl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 16:59:42 crc kubenswrapper[5049]: [-]has-synced failed: reason withheld
Jan 27 16:59:42 crc kubenswrapper[5049]: [+]process-running ok
Jan 27 16:59:42 crc kubenswrapper[5049]: healthz check failed
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.772731 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q5xl9" podUID="1d175bca-3f73-4ad1-be29-f724a6baee2c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.810355 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:42 crc kubenswrapper[5049]: E0127 16:59:42.810687 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:43.310659823 +0000 UTC m=+158.409633372 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.911610 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:42 crc kubenswrapper[5049]: E0127 16:59:42.911895 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:43.411849845 +0000 UTC m=+158.510823394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:42 crc kubenswrapper[5049]: I0127 16:59:42.911944 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:42 crc kubenswrapper[5049]: E0127 16:59:42.912400 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:43.412392301 +0000 UTC m=+158.511365850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.013080 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:43 crc kubenswrapper[5049]: E0127 16:59:43.013340 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:43.513299659 +0000 UTC m=+158.612273208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.013424 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:43 crc kubenswrapper[5049]: E0127 16:59:43.013918 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:43.513905328 +0000 UTC m=+158.612879067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
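Each failed mount or unmount above is parked by nestedpendingoperations with a durationBeforeRetry before the reconciler may try again; in these entries the delay sits at its 500ms base. The kubelet grows this delay exponentially for persistently failing operations, up to a cap of roughly two minutes (the cap is stated here as an assumption about the policy, not something visible in this excerpt). A generic sketch of that backoff pattern:

```go
// Sketch of exponential backoff behind durationBeforeRetry: start at
// 500ms (as in the entries above) and double per consecutive failure up
// to an assumed ~2m cap. Illustrative, not kubelet's nestedpendingoperations.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryWithBackoff(op func() error, base, max time.Duration) error {
	delay := base
	for attempt := 1; ; attempt++ {
		if err := op(); err == nil {
			return nil
		} else {
			fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n", attempt, err, delay)
		}
		time.Sleep(delay)
		if delay *= 2; delay > max {
			delay = max
		}
	}
}

func main() {
	i := 0
	_ = retryWithBackoff(func() error {
		if i++; i < 4 {
			return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
		}
		return nil // succeeds once the driver registers
	}, 500*time.Millisecond, 2*time.Minute)
}
```

In this log the failures resolve within a couple of seconds, so the delay never leaves its 500ms floor.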
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.114253 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:43 crc kubenswrapper[5049]: E0127 16:59:43.114461 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:43.614432768 +0000 UTC m=+158.713406317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.114601 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:43 crc kubenswrapper[5049]: E0127 16:59:43.115095 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:43.615070969 +0000 UTC m=+158.714044518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.215505 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:43 crc kubenswrapper[5049]: E0127 16:59:43.215714 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:43.715662602 +0000 UTC m=+158.814636151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.215792 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:43 crc kubenswrapper[5049]: E0127 16:59:43.216066 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:43.716054691 +0000 UTC m=+158.815028240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.284973 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 16:54:42 +0000 UTC, rotation deadline is 2026-12-11 20:03:42.724981955 +0000 UTC
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.285315 5049 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7635h3m59.439671156s for next certificate rotation
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.317863 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:43 crc kubenswrapper[5049]: E0127 16:59:43.318104 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:43.818061112 +0000 UTC m=+158.917034651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.318431 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:43 crc kubenswrapper[5049]: E0127 16:59:43.318864 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:43.81884795 +0000 UTC m=+158.917821499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
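The certificate_manager entries above show the rotation schedule for the kubelet-serving certificate: it expires 2027-01-27, but rotation is planned for a jittered point well before that (2026-12-11), and the manager simply sleeps until then (the 7635h wait). A sketch of that deadline computation; the 70-90% jitter window is an assumption about the policy, not something these two lines state:

```go
// Sketch: choose a rotation deadline at a random point late in a
// certificate's validity and compute the wait, as the certificate_manager
// entries above do. The 70%-90% window is an assumed policy.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// Jitter: rotate somewhere between 70% and 90% of the lifetime.
	offset := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(offset)
}

func main() {
	// Validity bounds taken from the log entry above (issuance assumed
	// one year before the logged expiration).
	notBefore, _ := time.Parse(time.RFC3339, "2026-01-27T16:54:42Z")
	notAfter, _ := time.Parse(time.RFC3339, "2027-01-27T16:54:42Z")
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Printf("rotation deadline is %v, waiting %v for next certificate rotation\n",
		deadline, time.Until(deadline).Round(time.Second))
}
```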
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.420137 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:43 crc kubenswrapper[5049]: E0127 16:59:43.420347 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:43.920330346 +0000 UTC m=+159.019303895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.427425 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" event={"ID":"7de8021c-2a6e-43d0-bd02-b297e5583c52","Type":"ContainerStarted","Data":"150348133b17dbef9dc902173691e7d1302d287788cf65a5f9c98ef1e5fb4dd6"}
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.427500 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" event={"ID":"7de8021c-2a6e-43d0-bd02-b297e5583c52","Type":"ContainerStarted","Data":"6ccac763ac65f97aaab55f967fb758903c315892a633e4742ef522253aaf82a4"}
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.433110 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-v9tsv" event={"ID":"ed398f74-73d3-4e3b-a7b8-57d283e9adfa","Type":"ContainerStarted","Data":"f25e1208ace1415605bae6df7ded2237472bf678415aa08a6fea4ef500f74d9c"}
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.456002 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pjs7m" event={"ID":"b22f850b-fb2b-4839-8b41-bb2a92059a5c","Type":"ContainerStarted","Data":"b7622bd0fbbca26768bcea0385f2ec3143bf100766cdf65f3f18f8471607397d"}
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.456910 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-pjs7m"
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.469949 5049 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-q7vfz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.470001 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" podUID="7e762e88-c00a-49b7-8a84-48c7fe50b602" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.470051 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b444j" event={"ID":"770cf38a-9f1f-441a-bf23-4944bd750e24","Type":"ContainerStarted","Data":"ff5f4cbf8ca4717fa79d28577208e1744062c7d79326d4ba9ff789681680fad6"}
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.470133 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-b444j" event={"ID":"770cf38a-9f1f-441a-bf23-4944bd750e24","Type":"ContainerStarted","Data":"a0ffeb61405f3ddda2dd52da2f6fb35ad2ac62adb986f5ee0ea9f84dc986f766"}
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.493486 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-sx7x4"
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.500394 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl"
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.500569 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6jbnp"
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.521743 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hj689"
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.525113 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:43 crc kubenswrapper[5049]: E0127 16:59:43.527899 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:44.027885496 +0000 UTC m=+159.126859045 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.529261 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-pjs7m" podStartSLOduration=8.529242871 podStartE2EDuration="8.529242871s" podCreationTimestamp="2026-01-27 16:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:43.527749509 +0000 UTC m=+158.626723058" watchObservedRunningTime="2026-01-27 16:59:43.529242871 +0000 UTC m=+158.628216420"
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.625996 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:43 crc kubenswrapper[5049]: E0127 16:59:43.626317 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:44.126302184 +0000 UTC m=+159.225275733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.727458 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 16:59:43 crc kubenswrapper[5049]: E0127 16:59:43.729774 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 16:59:44.229757675 +0000 UTC m=+159.328731224 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ttw4x" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.752984 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-b444j" podStartSLOduration=137.752948286 podStartE2EDuration="2m17.752948286s" podCreationTimestamp="2026-01-27 16:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:43.744562631 +0000 UTC m=+158.843536180" watchObservedRunningTime="2026-01-27 16:59:43.752948286 +0000 UTC m=+158.851921835"
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.767957 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s2n89"
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.770203 5049 patch_prober.go:28] interesting pod/router-default-5444994796-q5xl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 16:59:43 crc kubenswrapper[5049]: [-]has-synced failed: reason withheld
Jan 27 16:59:43 crc kubenswrapper[5049]: [+]process-running ok
Jan 27 16:59:43 crc kubenswrapper[5049]: healthz check failed
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.770310 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q5xl9" podUID="1d175bca-3f73-4ad1-be29-f724a6baee2c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.778059 5049 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.811409 5049 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T16:59:43.778096482Z","Handler":null,"Name":""}
Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.828952 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 27 16:59:43 crc kubenswrapper[5049]: E0127 16:59:43.829494 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 16:59:44.329471406 +0000 UTC m=+159.428444955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.838412 5049 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.838458 5049 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.930293 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.945694 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 16:59:43 crc kubenswrapper[5049]: I0127 16:59:43.945751 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.037504 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ttw4x\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.075978 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.135665 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.173774 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.479823 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" event={"ID":"7de8021c-2a6e-43d0-bd02-b297e5583c52","Type":"ContainerStarted","Data":"5c518ffea50dcf8dd4cd3b9e0fa547aa6e21b358c865955181d9eba55d427670"} Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.498593 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ttw4x"] Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.500824 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.501761 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.502049 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.508045 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.513197 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.534220 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-4gw2c" podStartSLOduration=9.534190025000001 podStartE2EDuration="9.534190025s" podCreationTimestamp="2026-01-27 16:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:44.534124982 +0000 UTC m=+159.633098531" watchObservedRunningTime="2026-01-27 16:59:44.534190025 +0000 UTC m=+159.633163574" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.538288 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.543419 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/817adc52-f4a0-475b-8d7a-fcac0750e185-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"817adc52-f4a0-475b-8d7a-fcac0750e185\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.543568 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/817adc52-f4a0-475b-8d7a-fcac0750e185-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"817adc52-f4a0-475b-8d7a-fcac0750e185\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.644601 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/817adc52-f4a0-475b-8d7a-fcac0750e185-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"817adc52-f4a0-475b-8d7a-fcac0750e185\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.644658 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/817adc52-f4a0-475b-8d7a-fcac0750e185-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"817adc52-f4a0-475b-8d7a-fcac0750e185\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.644726 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/817adc52-f4a0-475b-8d7a-fcac0750e185-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"817adc52-f4a0-475b-8d7a-fcac0750e185\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.752780 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/817adc52-f4a0-475b-8d7a-fcac0750e185-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"817adc52-f4a0-475b-8d7a-fcac0750e185\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.769581 5049 patch_prober.go:28] interesting pod/router-default-5444994796-q5xl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 16:59:44 crc kubenswrapper[5049]: [-]has-synced failed: reason withheld Jan 27 16:59:44 crc kubenswrapper[5049]: [+]process-running ok Jan 27 16:59:44 crc kubenswrapper[5049]: healthz check failed Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.769705 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q5xl9" podUID="1d175bca-3f73-4ad1-be29-f724a6baee2c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.846492 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.981590 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6g667"] Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.983157 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6g667" Jan 27 16:59:44 crc kubenswrapper[5049]: I0127 16:59:44.985634 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.076419 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6g667"] Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.150810 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50c0ecb4-7212-4c52-ba39-4fb298404899-utilities\") pod \"certified-operators-6g667\" (UID: \"50c0ecb4-7212-4c52-ba39-4fb298404899\") " pod="openshift-marketplace/certified-operators-6g667" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.151597 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn5gg\" (UniqueName: \"kubernetes.io/projected/50c0ecb4-7212-4c52-ba39-4fb298404899-kube-api-access-pn5gg\") pod \"certified-operators-6g667\" (UID: \"50c0ecb4-7212-4c52-ba39-4fb298404899\") " pod="openshift-marketplace/certified-operators-6g667" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.151637 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50c0ecb4-7212-4c52-ba39-4fb298404899-catalog-content\") pod \"certified-operators-6g667\" (UID: \"50c0ecb4-7212-4c52-ba39-4fb298404899\") " pod="openshift-marketplace/certified-operators-6g667" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.179257 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bwn2r"] Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.180393 5049 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bwn2r" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.182356 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.223171 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bwn2r"] Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.253716 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn5gg\" (UniqueName: \"kubernetes.io/projected/50c0ecb4-7212-4c52-ba39-4fb298404899-kube-api-access-pn5gg\") pod \"certified-operators-6g667\" (UID: \"50c0ecb4-7212-4c52-ba39-4fb298404899\") " pod="openshift-marketplace/certified-operators-6g667" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.253889 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50c0ecb4-7212-4c52-ba39-4fb298404899-catalog-content\") pod \"certified-operators-6g667\" (UID: \"50c0ecb4-7212-4c52-ba39-4fb298404899\") " pod="openshift-marketplace/certified-operators-6g667" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.253944 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50c0ecb4-7212-4c52-ba39-4fb298404899-utilities\") pod \"certified-operators-6g667\" (UID: \"50c0ecb4-7212-4c52-ba39-4fb298404899\") " pod="openshift-marketplace/certified-operators-6g667" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.254934 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50c0ecb4-7212-4c52-ba39-4fb298404899-utilities\") pod \"certified-operators-6g667\" (UID: \"50c0ecb4-7212-4c52-ba39-4fb298404899\") " pod="openshift-marketplace/certified-operators-6g667" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.254961 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50c0ecb4-7212-4c52-ba39-4fb298404899-catalog-content\") pod \"certified-operators-6g667\" (UID: \"50c0ecb4-7212-4c52-ba39-4fb298404899\") " pod="openshift-marketplace/certified-operators-6g667" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.283845 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn5gg\" (UniqueName: \"kubernetes.io/projected/50c0ecb4-7212-4c52-ba39-4fb298404899-kube-api-access-pn5gg\") pod \"certified-operators-6g667\" (UID: \"50c0ecb4-7212-4c52-ba39-4fb298404899\") " pod="openshift-marketplace/certified-operators-6g667" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.322168 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6g667" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.355006 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-catalog-content\") pod \"community-operators-bwn2r\" (UID: \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\") " pod="openshift-marketplace/community-operators-bwn2r" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.355131 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpzw5\" (UniqueName: \"kubernetes.io/projected/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-kube-api-access-rpzw5\") pod \"community-operators-bwn2r\" (UID: \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\") " pod="openshift-marketplace/community-operators-bwn2r" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.355153 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-utilities\") pod \"community-operators-bwn2r\" (UID: \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\") " pod="openshift-marketplace/community-operators-bwn2r" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.391543 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8rcrq"] Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.401503 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rcrq" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.443982 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8rcrq"] Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.459439 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpzw5\" (UniqueName: \"kubernetes.io/projected/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-kube-api-access-rpzw5\") pod \"community-operators-bwn2r\" (UID: \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\") " pod="openshift-marketplace/community-operators-bwn2r" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.459496 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-utilities\") pod \"community-operators-bwn2r\" (UID: \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\") " pod="openshift-marketplace/community-operators-bwn2r" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.459542 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-catalog-content\") pod \"community-operators-bwn2r\" (UID: \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\") " pod="openshift-marketplace/community-operators-bwn2r" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.460092 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-catalog-content\") pod \"community-operators-bwn2r\" (UID: \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\") " pod="openshift-marketplace/community-operators-bwn2r" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.461964 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-utilities\") pod \"community-operators-bwn2r\" (UID: \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\") " pod="openshift-marketplace/community-operators-bwn2r" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.505836 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpzw5\" (UniqueName: \"kubernetes.io/projected/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-kube-api-access-rpzw5\") pod \"community-operators-bwn2r\" (UID: \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\") " pod="openshift-marketplace/community-operators-bwn2r" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.513319 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" event={"ID":"96e75cde-66e8-4ab2-b715-3b07b34bc3a1","Type":"ContainerStarted","Data":"c028e7ef21355a533b551c7c94cc7e6dbf79efbe2cb2407698bea42f24902281"} Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.513394 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" event={"ID":"96e75cde-66e8-4ab2-b715-3b07b34bc3a1","Type":"ContainerStarted","Data":"e26d4ff67846f7719631f05d03fb24fe14534f0d6bec3d1238c001e817fa4bde"} Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.514091 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.514552 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.550216 5049 generic.go:334] "Generic (PLEG): container finished" podID="7aeb400e-8352-4de5-baf4-e64073f57d32" containerID="9d7a8a58384555f6e1888850d427ed817d9c32bbafb01f166a0bb5570c8c5d0a" exitCode=0 Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.550741 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" event={"ID":"7aeb400e-8352-4de5-baf4-e64073f57d32","Type":"ContainerDied","Data":"9d7a8a58384555f6e1888850d427ed817d9c32bbafb01f166a0bb5570c8c5d0a"} Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.564191 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g468n\" (UniqueName: \"kubernetes.io/projected/c0d9fed4-edc5-4f5e-9962-91a6382fb569-kube-api-access-g468n\") pod \"certified-operators-8rcrq\" (UID: \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\") " pod="openshift-marketplace/certified-operators-8rcrq" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.564237 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d9fed4-edc5-4f5e-9962-91a6382fb569-catalog-content\") pod \"certified-operators-8rcrq\" (UID: \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\") " pod="openshift-marketplace/certified-operators-8rcrq" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.564278 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d9fed4-edc5-4f5e-9962-91a6382fb569-utilities\") pod \"certified-operators-8rcrq\" (UID: \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\") " 
pod="openshift-marketplace/certified-operators-8rcrq" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.597499 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jkf2s"] Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.597632 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" podStartSLOduration=138.597622274 podStartE2EDuration="2m18.597622274s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:45.586512857 +0000 UTC m=+160.685486406" watchObservedRunningTime="2026-01-27 16:59:45.597622274 +0000 UTC m=+160.696595823" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.599201 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jkf2s" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.628151 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jkf2s"] Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.667813 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g468n\" (UniqueName: \"kubernetes.io/projected/c0d9fed4-edc5-4f5e-9962-91a6382fb569-kube-api-access-g468n\") pod \"certified-operators-8rcrq\" (UID: \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\") " pod="openshift-marketplace/certified-operators-8rcrq" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.667949 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d9fed4-edc5-4f5e-9962-91a6382fb569-catalog-content\") pod \"certified-operators-8rcrq\" (UID: \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\") " pod="openshift-marketplace/certified-operators-8rcrq" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.668026 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d9fed4-edc5-4f5e-9962-91a6382fb569-utilities\") pod \"certified-operators-8rcrq\" (UID: \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\") " pod="openshift-marketplace/certified-operators-8rcrq" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.669888 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d9fed4-edc5-4f5e-9962-91a6382fb569-catalog-content\") pod \"certified-operators-8rcrq\" (UID: \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\") " pod="openshift-marketplace/certified-operators-8rcrq" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.670469 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d9fed4-edc5-4f5e-9962-91a6382fb569-utilities\") pod \"certified-operators-8rcrq\" (UID: \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\") " pod="openshift-marketplace/certified-operators-8rcrq" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.676883 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.711944 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g468n\" 
(UniqueName: \"kubernetes.io/projected/c0d9fed4-edc5-4f5e-9962-91a6382fb569-kube-api-access-g468n\") pod \"certified-operators-8rcrq\" (UID: \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\") " pod="openshift-marketplace/certified-operators-8rcrq" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.741294 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rcrq" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.770357 5049 patch_prober.go:28] interesting pod/router-default-5444994796-q5xl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 16:59:45 crc kubenswrapper[5049]: [-]has-synced failed: reason withheld Jan 27 16:59:45 crc kubenswrapper[5049]: [+]process-running ok Jan 27 16:59:45 crc kubenswrapper[5049]: healthz check failed Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.770406 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q5xl9" podUID="1d175bca-3f73-4ad1-be29-f724a6baee2c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.774338 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bada626-6ad8-4fad-8649-0b9f3497e68e-utilities\") pod \"community-operators-jkf2s\" (UID: \"7bada626-6ad8-4fad-8649-0b9f3497e68e\") " pod="openshift-marketplace/community-operators-jkf2s" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.774438 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bada626-6ad8-4fad-8649-0b9f3497e68e-catalog-content\") pod \"community-operators-jkf2s\" (UID: \"7bada626-6ad8-4fad-8649-0b9f3497e68e\") " pod="openshift-marketplace/community-operators-jkf2s" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.774475 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t9m2\" (UniqueName: \"kubernetes.io/projected/7bada626-6ad8-4fad-8649-0b9f3497e68e-kube-api-access-4t9m2\") pod \"community-operators-jkf2s\" (UID: \"7bada626-6ad8-4fad-8649-0b9f3497e68e\") " pod="openshift-marketplace/community-operators-jkf2s" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.802053 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bwn2r" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.864115 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6g667"] Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.877356 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bada626-6ad8-4fad-8649-0b9f3497e68e-utilities\") pod \"community-operators-jkf2s\" (UID: \"7bada626-6ad8-4fad-8649-0b9f3497e68e\") " pod="openshift-marketplace/community-operators-jkf2s" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.877923 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bada626-6ad8-4fad-8649-0b9f3497e68e-catalog-content\") pod \"community-operators-jkf2s\" (UID: \"7bada626-6ad8-4fad-8649-0b9f3497e68e\") " pod="openshift-marketplace/community-operators-jkf2s" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.877956 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t9m2\" (UniqueName: \"kubernetes.io/projected/7bada626-6ad8-4fad-8649-0b9f3497e68e-kube-api-access-4t9m2\") pod \"community-operators-jkf2s\" (UID: \"7bada626-6ad8-4fad-8649-0b9f3497e68e\") " pod="openshift-marketplace/community-operators-jkf2s" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.878471 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bada626-6ad8-4fad-8649-0b9f3497e68e-utilities\") pod \"community-operators-jkf2s\" (UID: \"7bada626-6ad8-4fad-8649-0b9f3497e68e\") " pod="openshift-marketplace/community-operators-jkf2s" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.878921 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bada626-6ad8-4fad-8649-0b9f3497e68e-catalog-content\") pod \"community-operators-jkf2s\" (UID: \"7bada626-6ad8-4fad-8649-0b9f3497e68e\") " pod="openshift-marketplace/community-operators-jkf2s" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.915581 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t9m2\" (UniqueName: \"kubernetes.io/projected/7bada626-6ad8-4fad-8649-0b9f3497e68e-kube-api-access-4t9m2\") pod \"community-operators-jkf2s\" (UID: \"7bada626-6ad8-4fad-8649-0b9f3497e68e\") " pod="openshift-marketplace/community-operators-jkf2s" Jan 27 16:59:45 crc kubenswrapper[5049]: I0127 16:59:45.931976 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jkf2s" Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.220021 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bwn2r"] Jan 27 16:59:46 crc kubenswrapper[5049]: W0127 16:59:46.272769 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6841cc70_80cd_499f_a8e6_e2a9031dcbf0.slice/crio-693d099465d796041e99179e9e80ae0b38e168c8caedd0e10121d492864387f1 WatchSource:0}: Error finding container 693d099465d796041e99179e9e80ae0b38e168c8caedd0e10121d492864387f1: Status 404 returned error can't find the container with id 693d099465d796041e99179e9e80ae0b38e168c8caedd0e10121d492864387f1 Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.377734 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8rcrq"] Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.458082 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jkf2s"] Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.582137 5049 generic.go:334] "Generic (PLEG): container finished" podID="50c0ecb4-7212-4c52-ba39-4fb298404899" containerID="dbf74617e714e5aa251a6f6c50bd959ee11a0abc517bfbf34e37f09c6269caf9" exitCode=0 Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.582225 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6g667" event={"ID":"50c0ecb4-7212-4c52-ba39-4fb298404899","Type":"ContainerDied","Data":"dbf74617e714e5aa251a6f6c50bd959ee11a0abc517bfbf34e37f09c6269caf9"} Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.582259 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6g667" event={"ID":"50c0ecb4-7212-4c52-ba39-4fb298404899","Type":"ContainerStarted","Data":"3bbce7d122156226a8ed5bf093fd5cb54039606b643f7045caaa53c4a4cab464"} Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.585874 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.587184 5049 generic.go:334] "Generic (PLEG): container finished" podID="6841cc70-80cd-499f-a8e6-e2a9031dcbf0" containerID="b501daacffd09c942778bd341494ce1aa0f0c8b69f62b4d3a2c20f881518b6b9" exitCode=0 Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.587285 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwn2r" event={"ID":"6841cc70-80cd-499f-a8e6-e2a9031dcbf0","Type":"ContainerDied","Data":"b501daacffd09c942778bd341494ce1aa0f0c8b69f62b4d3a2c20f881518b6b9"} Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.587327 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwn2r" event={"ID":"6841cc70-80cd-499f-a8e6-e2a9031dcbf0","Type":"ContainerStarted","Data":"693d099465d796041e99179e9e80ae0b38e168c8caedd0e10121d492864387f1"} Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.594232 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkf2s" event={"ID":"7bada626-6ad8-4fad-8649-0b9f3497e68e","Type":"ContainerStarted","Data":"3f1f482e415240fffa9bcd3d9eb5dd1675103b0d59a6619e3b275a944fbdb9cb"} Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.598422 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"817adc52-f4a0-475b-8d7a-fcac0750e185","Type":"ContainerStarted","Data":"0d6f2dbc193d0e6b8ecf1b0e8705855a3146ef07ebbe0189d0594f0fe318362e"} Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.598487 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"817adc52-f4a0-475b-8d7a-fcac0750e185","Type":"ContainerStarted","Data":"75486f5209dd689028957b1097dbf85fd93e951c33ad12a0a6672b37a99a61ab"} Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.604391 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rcrq" event={"ID":"c0d9fed4-edc5-4f5e-9962-91a6382fb569","Type":"ContainerStarted","Data":"21a204716a6d5c123d2c9861c508a8e9e3ccb872e4279e2917541bb931cf2d5f"} Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.770925 5049 patch_prober.go:28] interesting pod/router-default-5444994796-q5xl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 16:59:46 crc kubenswrapper[5049]: [-]has-synced failed: reason withheld Jan 27 16:59:46 crc kubenswrapper[5049]: [+]process-running ok Jan 27 16:59:46 crc kubenswrapper[5049]: healthz check failed Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.771033 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q5xl9" podUID="1d175bca-3f73-4ad1-be29-f724a6baee2c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.813081 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.901661 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aeb400e-8352-4de5-baf4-e64073f57d32-secret-volume\") pod \"7aeb400e-8352-4de5-baf4-e64073f57d32\" (UID: \"7aeb400e-8352-4de5-baf4-e64073f57d32\") " Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.901887 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aeb400e-8352-4de5-baf4-e64073f57d32-config-volume\") pod \"7aeb400e-8352-4de5-baf4-e64073f57d32\" (UID: \"7aeb400e-8352-4de5-baf4-e64073f57d32\") " Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.902021 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6gp4\" (UniqueName: \"kubernetes.io/projected/7aeb400e-8352-4de5-baf4-e64073f57d32-kube-api-access-x6gp4\") pod \"7aeb400e-8352-4de5-baf4-e64073f57d32\" (UID: \"7aeb400e-8352-4de5-baf4-e64073f57d32\") " Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.902971 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aeb400e-8352-4de5-baf4-e64073f57d32-config-volume" (OuterVolumeSpecName: "config-volume") pod "7aeb400e-8352-4de5-baf4-e64073f57d32" (UID: "7aeb400e-8352-4de5-baf4-e64073f57d32"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.910703 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aeb400e-8352-4de5-baf4-e64073f57d32-kube-api-access-x6gp4" (OuterVolumeSpecName: "kube-api-access-x6gp4") pod "7aeb400e-8352-4de5-baf4-e64073f57d32" (UID: "7aeb400e-8352-4de5-baf4-e64073f57d32"). InnerVolumeSpecName "kube-api-access-x6gp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:59:46 crc kubenswrapper[5049]: I0127 16:59:46.911358 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aeb400e-8352-4de5-baf4-e64073f57d32-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7aeb400e-8352-4de5-baf4-e64073f57d32" (UID: "7aeb400e-8352-4de5-baf4-e64073f57d32"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.003483 5049 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aeb400e-8352-4de5-baf4-e64073f57d32-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.003528 5049 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aeb400e-8352-4de5-baf4-e64073f57d32-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.003549 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6gp4\" (UniqueName: \"kubernetes.io/projected/7aeb400e-8352-4de5-baf4-e64073f57d32-kube-api-access-x6gp4\") on node \"crc\" DevicePath \"\"" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.182802 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjtk"] Jan 27 16:59:47 crc kubenswrapper[5049]: E0127 16:59:47.183237 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aeb400e-8352-4de5-baf4-e64073f57d32" containerName="collect-profiles" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.183280 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aeb400e-8352-4de5-baf4-e64073f57d32" containerName="collect-profiles" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.183547 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aeb400e-8352-4de5-baf4-e64073f57d32" containerName="collect-profiles" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.185372 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdjtk" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.187157 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjtk"] Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.190906 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.308591 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c45b66c7-0a92-456f-927a-fe596ffdedb3-catalog-content\") pod \"redhat-marketplace-xdjtk\" (UID: \"c45b66c7-0a92-456f-927a-fe596ffdedb3\") " pod="openshift-marketplace/redhat-marketplace-xdjtk" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.308740 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbdsx\" (UniqueName: \"kubernetes.io/projected/c45b66c7-0a92-456f-927a-fe596ffdedb3-kube-api-access-lbdsx\") pod \"redhat-marketplace-xdjtk\" (UID: \"c45b66c7-0a92-456f-927a-fe596ffdedb3\") " pod="openshift-marketplace/redhat-marketplace-xdjtk" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.308770 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c45b66c7-0a92-456f-927a-fe596ffdedb3-utilities\") pod \"redhat-marketplace-xdjtk\" (UID: \"c45b66c7-0a92-456f-927a-fe596ffdedb3\") " pod="openshift-marketplace/redhat-marketplace-xdjtk" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.409980 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbdsx\" (UniqueName: \"kubernetes.io/projected/c45b66c7-0a92-456f-927a-fe596ffdedb3-kube-api-access-lbdsx\") pod \"redhat-marketplace-xdjtk\" (UID: \"c45b66c7-0a92-456f-927a-fe596ffdedb3\") " pod="openshift-marketplace/redhat-marketplace-xdjtk" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.410030 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c45b66c7-0a92-456f-927a-fe596ffdedb3-utilities\") pod \"redhat-marketplace-xdjtk\" (UID: \"c45b66c7-0a92-456f-927a-fe596ffdedb3\") " pod="openshift-marketplace/redhat-marketplace-xdjtk" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.410082 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c45b66c7-0a92-456f-927a-fe596ffdedb3-catalog-content\") pod \"redhat-marketplace-xdjtk\" (UID: \"c45b66c7-0a92-456f-927a-fe596ffdedb3\") " pod="openshift-marketplace/redhat-marketplace-xdjtk" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.410662 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c45b66c7-0a92-456f-927a-fe596ffdedb3-catalog-content\") pod \"redhat-marketplace-xdjtk\" (UID: \"c45b66c7-0a92-456f-927a-fe596ffdedb3\") " pod="openshift-marketplace/redhat-marketplace-xdjtk" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.411431 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c45b66c7-0a92-456f-927a-fe596ffdedb3-utilities\") pod \"redhat-marketplace-xdjtk\" (UID: 
\"c45b66c7-0a92-456f-927a-fe596ffdedb3\") " pod="openshift-marketplace/redhat-marketplace-xdjtk" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.432856 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbdsx\" (UniqueName: \"kubernetes.io/projected/c45b66c7-0a92-456f-927a-fe596ffdedb3-kube-api-access-lbdsx\") pod \"redhat-marketplace-xdjtk\" (UID: \"c45b66c7-0a92-456f-927a-fe596ffdedb3\") " pod="openshift-marketplace/redhat-marketplace-xdjtk" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.503582 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdjtk" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.567739 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5w8k7"] Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.568736 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5w8k7" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.587784 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5w8k7"] Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.657908 5049 generic.go:334] "Generic (PLEG): container finished" podID="c0d9fed4-edc5-4f5e-9962-91a6382fb569" containerID="918888ab9064441dc7545abaeb6479c4bf123a4b91011cebd74490619e72193f" exitCode=0 Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.658895 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rcrq" event={"ID":"c0d9fed4-edc5-4f5e-9962-91a6382fb569","Type":"ContainerDied","Data":"918888ab9064441dc7545abaeb6479c4bf123a4b91011cebd74490619e72193f"} Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.667179 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" event={"ID":"7aeb400e-8352-4de5-baf4-e64073f57d32","Type":"ContainerDied","Data":"785601291bc736b26c7e78d0048cd50a5f76aa0b32b796e922d68d08ae936bd5"} Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.667229 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="785601291bc736b26c7e78d0048cd50a5f76aa0b32b796e922d68d08ae936bd5" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.667303 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.685718 5049 generic.go:334] "Generic (PLEG): container finished" podID="7bada626-6ad8-4fad-8649-0b9f3497e68e" containerID="e0c819dc354ceeade8b1b86eb5c8b22fe9751edd1b79f40ace67ba586be640d3" exitCode=0 Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.685787 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkf2s" event={"ID":"7bada626-6ad8-4fad-8649-0b9f3497e68e","Type":"ContainerDied","Data":"e0c819dc354ceeade8b1b86eb5c8b22fe9751edd1b79f40ace67ba586be640d3"} Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.716137 5049 generic.go:334] "Generic (PLEG): container finished" podID="817adc52-f4a0-475b-8d7a-fcac0750e185" containerID="0d6f2dbc193d0e6b8ecf1b0e8705855a3146ef07ebbe0189d0594f0fe318362e" exitCode=0 Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.716219 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"817adc52-f4a0-475b-8d7a-fcac0750e185","Type":"ContainerDied","Data":"0d6f2dbc193d0e6b8ecf1b0e8705855a3146ef07ebbe0189d0594f0fe318362e"} Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.722069 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.722569 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.726397 5049 patch_prober.go:28] interesting pod/console-f9d7485db-qnqlr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.726477 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qnqlr" podUID="ed96c1d9-55f9-48df-970b-2b1e71a90633" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.737711 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km6xd\" (UniqueName: \"kubernetes.io/projected/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-kube-api-access-km6xd\") pod \"redhat-marketplace-5w8k7\" (UID: \"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\") " pod="openshift-marketplace/redhat-marketplace-5w8k7" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.737793 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-catalog-content\") pod \"redhat-marketplace-5w8k7\" (UID: \"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\") " pod="openshift-marketplace/redhat-marketplace-5w8k7" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.737829 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-utilities\") pod \"redhat-marketplace-5w8k7\" (UID: \"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\") " 
pod="openshift-marketplace/redhat-marketplace-5w8k7" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.738329 5049 patch_prober.go:28] interesting pod/downloads-7954f5f757-msgbv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.738395 5049 patch_prober.go:28] interesting pod/downloads-7954f5f757-msgbv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.738482 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-msgbv" podUID="b96780a0-72a6-4ee0-ae94-60221d4f0a58" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.738940 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-msgbv" podUID="b96780a0-72a6-4ee0-ae94-60221d4f0a58" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.772755 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.776362 5049 patch_prober.go:28] interesting pod/router-default-5444994796-q5xl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 16:59:47 crc kubenswrapper[5049]: [-]has-synced failed: reason withheld Jan 27 16:59:47 crc kubenswrapper[5049]: [+]process-running ok Jan 27 16:59:47 crc kubenswrapper[5049]: healthz check failed Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.776440 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q5xl9" podUID="1d175bca-3f73-4ad1-be29-f724a6baee2c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.781276 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.781337 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.838858 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km6xd\" (UniqueName: \"kubernetes.io/projected/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-kube-api-access-km6xd\") pod \"redhat-marketplace-5w8k7\" (UID: 
\"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\") " pod="openshift-marketplace/redhat-marketplace-5w8k7" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.838918 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-catalog-content\") pod \"redhat-marketplace-5w8k7\" (UID: \"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\") " pod="openshift-marketplace/redhat-marketplace-5w8k7" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.838938 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-utilities\") pod \"redhat-marketplace-5w8k7\" (UID: \"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\") " pod="openshift-marketplace/redhat-marketplace-5w8k7" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.839395 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-utilities\") pod \"redhat-marketplace-5w8k7\" (UID: \"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\") " pod="openshift-marketplace/redhat-marketplace-5w8k7" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.839477 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-catalog-content\") pod \"redhat-marketplace-5w8k7\" (UID: \"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\") " pod="openshift-marketplace/redhat-marketplace-5w8k7" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.862170 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km6xd\" (UniqueName: \"kubernetes.io/projected/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-kube-api-access-km6xd\") pod \"redhat-marketplace-5w8k7\" (UID: \"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\") " pod="openshift-marketplace/redhat-marketplace-5w8k7" Jan 27 16:59:47 crc kubenswrapper[5049]: I0127 16:59:47.953026 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5w8k7" Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.006182 5049 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.102506 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjtk"]
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.143408 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/817adc52-f4a0-475b-8d7a-fcac0750e185-kubelet-dir\") pod \"817adc52-f4a0-475b-8d7a-fcac0750e185\" (UID: \"817adc52-f4a0-475b-8d7a-fcac0750e185\") "
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.143471 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/817adc52-f4a0-475b-8d7a-fcac0750e185-kube-api-access\") pod \"817adc52-f4a0-475b-8d7a-fcac0750e185\" (UID: \"817adc52-f4a0-475b-8d7a-fcac0750e185\") "
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.144394 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817adc52-f4a0-475b-8d7a-fcac0750e185-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "817adc52-f4a0-475b-8d7a-fcac0750e185" (UID: "817adc52-f4a0-475b-8d7a-fcac0750e185"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.144565 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-b444j"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.144651 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-b444j"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.153289 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817adc52-f4a0-475b-8d7a-fcac0750e185-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "817adc52-f4a0-475b-8d7a-fcac0750e185" (UID: "817adc52-f4a0-475b-8d7a-fcac0750e185"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.161307 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-b444j"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.179284 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-58k4t"]
Jan 27 16:59:48 crc kubenswrapper[5049]: E0127 16:59:48.180190 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="817adc52-f4a0-475b-8d7a-fcac0750e185" containerName="pruner"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.180209 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="817adc52-f4a0-475b-8d7a-fcac0750e185" containerName="pruner"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.180332 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="817adc52-f4a0-475b-8d7a-fcac0750e185" containerName="pruner"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.181384 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58k4t"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.190925 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.196745 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-58k4t"]
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.245161 5049 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/817adc52-f4a0-475b-8d7a-fcac0750e185-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.245188 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/817adc52-f4a0-475b-8d7a-fcac0750e185-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.325761 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5w8k7"]
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.347641 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4792s\" (UniqueName: \"kubernetes.io/projected/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-kube-api-access-4792s\") pod \"redhat-operators-58k4t\" (UID: \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\") " pod="openshift-marketplace/redhat-operators-58k4t"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.347733 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-utilities\") pod \"redhat-operators-58k4t\" (UID: \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\") " pod="openshift-marketplace/redhat-operators-58k4t"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.347769 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-catalog-content\") pod \"redhat-operators-58k4t\" (UID: \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\") " pod="openshift-marketplace/redhat-operators-58k4t"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.448932 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4792s\" (UniqueName: \"kubernetes.io/projected/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-kube-api-access-4792s\") pod \"redhat-operators-58k4t\" (UID: \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\") " pod="openshift-marketplace/redhat-operators-58k4t"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.449000 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-utilities\") pod \"redhat-operators-58k4t\" (UID: \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\") " pod="openshift-marketplace/redhat-operators-58k4t"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.449036 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-catalog-content\") pod \"redhat-operators-58k4t\" (UID: \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\") " pod="openshift-marketplace/redhat-operators-58k4t"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.449984 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-utilities\") pod \"redhat-operators-58k4t\" (UID: \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\") " pod="openshift-marketplace/redhat-operators-58k4t"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.450291 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-catalog-content\") pod \"redhat-operators-58k4t\" (UID: \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\") " pod="openshift-marketplace/redhat-operators-58k4t"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.471974 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4792s\" (UniqueName: \"kubernetes.io/projected/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-kube-api-access-4792s\") pod \"redhat-operators-58k4t\" (UID: \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\") " pod="openshift-marketplace/redhat-operators-58k4t"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.544734 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58k4t"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.571980 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dqrvs"]
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.573514 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqrvs"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.579395 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqrvs"]
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.764391 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.764849 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-catalog-content\") pod \"redhat-operators-dqrvs\" (UID: \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\") " pod="openshift-marketplace/redhat-operators-dqrvs"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.764891 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhqvp\" (UniqueName: \"kubernetes.io/projected/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-kube-api-access-xhqvp\") pod \"redhat-operators-dqrvs\" (UID: \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\") " pod="openshift-marketplace/redhat-operators-dqrvs"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.764922 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-utilities\") pod \"redhat-operators-dqrvs\" (UID: \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\") " pod="openshift-marketplace/redhat-operators-dqrvs"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.764415 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"817adc52-f4a0-475b-8d7a-fcac0750e185","Type":"ContainerDied","Data":"75486f5209dd689028957b1097dbf85fd93e951c33ad12a0a6672b37a99a61ab"}
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.765006 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75486f5209dd689028957b1097dbf85fd93e951c33ad12a0a6672b37a99a61ab"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.767968 5049 patch_prober.go:28] interesting pod/router-default-5444994796-q5xl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 16:59:48 crc kubenswrapper[5049]: [-]has-synced failed: reason withheld
Jan 27 16:59:48 crc kubenswrapper[5049]: [+]process-running ok
Jan 27 16:59:48 crc kubenswrapper[5049]: healthz check failed
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.768007 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q5xl9" podUID="1d175bca-3f73-4ad1-be29-f724a6baee2c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.771514 5049 generic.go:334] "Generic (PLEG): container finished" podID="c45b66c7-0a92-456f-927a-fe596ffdedb3" containerID="d984fd5b6431eab903e14083a94721b8b17273c7acbf481a36b2f1a77edcfc22" exitCode=0
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.771579 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjtk" event={"ID":"c45b66c7-0a92-456f-927a-fe596ffdedb3","Type":"ContainerDied","Data":"d984fd5b6431eab903e14083a94721b8b17273c7acbf481a36b2f1a77edcfc22"}
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.771610 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjtk" event={"ID":"c45b66c7-0a92-456f-927a-fe596ffdedb3","Type":"ContainerStarted","Data":"e990fc4e48187a7f468facda08734c89c4315769ac171e5a29cdfc977e1dc89b"}
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.780816 5049 generic.go:334] "Generic (PLEG): container finished" podID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5" containerID="d332496ac769e3ae99e8207a143b600ce3960bb7d1c27c1b0e4ff1a3ef2cab7c" exitCode=0
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.780908 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5w8k7" event={"ID":"11537c3a-0298-48bc-a5f8-b79fe47c9cd5","Type":"ContainerDied","Data":"d332496ac769e3ae99e8207a143b600ce3960bb7d1c27c1b0e4ff1a3ef2cab7c"}
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.780969 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5w8k7" event={"ID":"11537c3a-0298-48bc-a5f8-b79fe47c9cd5","Type":"ContainerStarted","Data":"9096b80c1c29528319b9d83bc9cc4900beeee9c51804ad63f65d5792342b981e"}
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.788953 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-b444j"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.866713 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhqvp\" (UniqueName: \"kubernetes.io/projected/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-kube-api-access-xhqvp\") pod \"redhat-operators-dqrvs\" (UID: \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\") " pod="openshift-marketplace/redhat-operators-dqrvs"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.866780 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-utilities\") pod \"redhat-operators-dqrvs\" (UID: \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\") " pod="openshift-marketplace/redhat-operators-dqrvs"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.866859 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-catalog-content\") pod \"redhat-operators-dqrvs\" (UID: \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\") " pod="openshift-marketplace/redhat-operators-dqrvs"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.867312 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-catalog-content\") pod \"redhat-operators-dqrvs\" (UID: \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\") " pod="openshift-marketplace/redhat-operators-dqrvs"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.867771 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-utilities\") pod \"redhat-operators-dqrvs\" (UID: \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\") " pod="openshift-marketplace/redhat-operators-dqrvs"
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.904546 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-58k4t"]
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.910391 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhqvp\" (UniqueName: \"kubernetes.io/projected/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-kube-api-access-xhqvp\") pod \"redhat-operators-dqrvs\" (UID: \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\") " pod="openshift-marketplace/redhat-operators-dqrvs"
Jan 27 16:59:48 crc kubenswrapper[5049]: W0127 16:59:48.938229 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb9802f7_71e8_4b07_a308_7a0fe06aa4b1.slice/crio-51c4563b5a817f68cf78a14714acf2b1c4f981c3965a2df4fdc05bc4dd3df2eb WatchSource:0}: Error finding container 51c4563b5a817f68cf78a14714acf2b1c4f981c3965a2df4fdc05bc4dd3df2eb: Status 404 returned error can't find the container with id 51c4563b5a817f68cf78a14714acf2b1c4f981c3965a2df4fdc05bc4dd3df2eb
Jan 27 16:59:48 crc kubenswrapper[5049]: I0127 16:59:48.951234 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqrvs"
Jan 27 16:59:49 crc kubenswrapper[5049]: I0127 16:59:49.063594 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf"
Jan 27 16:59:49 crc kubenswrapper[5049]: I0127 16:59:49.596327 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqrvs"]
Jan 27 16:59:49 crc kubenswrapper[5049]: W0127 16:59:49.608043 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff41f89_4b87_4d1a_bf71_39ec568e3a0a.slice/crio-7d78dbae317860cf93806b52f1f1b4e79d2308e5ba0add4f19489db7438baa6e WatchSource:0}: Error finding container 7d78dbae317860cf93806b52f1f1b4e79d2308e5ba0add4f19489db7438baa6e: Status 404 returned error can't find the container with id 7d78dbae317860cf93806b52f1f1b4e79d2308e5ba0add4f19489db7438baa6e
Jan 27 16:59:49 crc kubenswrapper[5049]: I0127 16:59:49.768392 5049 patch_prober.go:28] interesting pod/router-default-5444994796-q5xl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 27 16:59:49 crc kubenswrapper[5049]: [-]has-synced failed: reason withheld
Jan 27 16:59:49 crc kubenswrapper[5049]: [+]process-running ok
Jan 27 16:59:49 crc kubenswrapper[5049]: healthz check failed
Jan 27 16:59:49 crc kubenswrapper[5049]: I0127 16:59:49.768456 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-q5xl9" podUID="1d175bca-3f73-4ad1-be29-f724a6baee2c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 27 16:59:49 crc kubenswrapper[5049]: I0127 16:59:49.791519 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs\") pod \"network-metrics-daemon-lv4sx\" (UID: \"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\") " pod="openshift-multus/network-metrics-daemon-lv4sx"
Jan 27 16:59:49 crc kubenswrapper[5049]: I0127 16:59:49.807390 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqrvs" event={"ID":"bff41f89-4b87-4d1a-bf71-39ec568e3a0a","Type":"ContainerStarted","Data":"7d78dbae317860cf93806b52f1f1b4e79d2308e5ba0add4f19489db7438baa6e"}
Jan 27 16:59:49 crc kubenswrapper[5049]: I0127 16:59:49.811397 5049 generic.go:334] "Generic (PLEG): container finished" podID="bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" containerID="a0b4dd238421b25856c634a02202843c93b564a30735bf234ef36f7d008cd228" exitCode=0
containerID="a0b4dd238421b25856c634a02202843c93b564a30735bf234ef36f7d008cd228" exitCode=0 Jan 27 16:59:49 crc kubenswrapper[5049]: I0127 16:59:49.813345 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58k4t" event={"ID":"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1","Type":"ContainerDied","Data":"a0b4dd238421b25856c634a02202843c93b564a30735bf234ef36f7d008cd228"} Jan 27 16:59:49 crc kubenswrapper[5049]: I0127 16:59:49.813375 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58k4t" event={"ID":"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1","Type":"ContainerStarted","Data":"51c4563b5a817f68cf78a14714acf2b1c4f981c3965a2df4fdc05bc4dd3df2eb"} Jan 27 16:59:49 crc kubenswrapper[5049]: I0127 16:59:49.822900 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d48a67e1-cecf-41d6-a42c-52bdcd3ab892-metrics-certs\") pod \"network-metrics-daemon-lv4sx\" (UID: \"d48a67e1-cecf-41d6-a42c-52bdcd3ab892\") " pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.023379 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lv4sx" Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.558519 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.559984 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.562874 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.564497 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.577155 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.607729 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6158c04-2faf-466c-8662-b59a5177b778-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c6158c04-2faf-466c-8662-b59a5177b778\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.607909 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6158c04-2faf-466c-8662-b59a5177b778-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c6158c04-2faf-466c-8662-b59a5177b778\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.646429 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lv4sx"] Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.709353 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6158c04-2faf-466c-8662-b59a5177b778-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c6158c04-2faf-466c-8662-b59a5177b778\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.709501 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6158c04-2faf-466c-8662-b59a5177b778-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c6158c04-2faf-466c-8662-b59a5177b778\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.709602 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6158c04-2faf-466c-8662-b59a5177b778-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c6158c04-2faf-466c-8662-b59a5177b778\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.729873 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6158c04-2faf-466c-8662-b59a5177b778-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c6158c04-2faf-466c-8662-b59a5177b778\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.773944 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.778433 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-q5xl9" Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.878269 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" event={"ID":"d48a67e1-cecf-41d6-a42c-52bdcd3ab892","Type":"ContainerStarted","Data":"b2801d0713eac234c3a292dc1df7a90802b78e156eda33fe5ef315395592d493"} Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.884511 5049 generic.go:334] "Generic (PLEG): container finished" podID="bff41f89-4b87-4d1a-bf71-39ec568e3a0a" containerID="3eda3d6abde26392c8061c9e1d27e32bf31ded14cfcef2aa495f130ec31cfcaa" exitCode=0 Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.884944 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqrvs" event={"ID":"bff41f89-4b87-4d1a-bf71-39ec568e3a0a","Type":"ContainerDied","Data":"3eda3d6abde26392c8061c9e1d27e32bf31ded14cfcef2aa495f130ec31cfcaa"} Jan 27 16:59:50 crc kubenswrapper[5049]: I0127 16:59:50.895348 5049 util.go:30] "No sandbox for pod can be found. 
Jan 27 16:59:51 crc kubenswrapper[5049]: I0127 16:59:51.367820 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 27 16:59:51 crc kubenswrapper[5049]: I0127 16:59:51.899507 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c6158c04-2faf-466c-8662-b59a5177b778","Type":"ContainerStarted","Data":"e8db89c238fafe244e3a1cf32a9a00aa77a21750a16b85d148f4fefbf1776f63"}
Jan 27 16:59:51 crc kubenswrapper[5049]: I0127 16:59:51.913426 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" event={"ID":"d48a67e1-cecf-41d6-a42c-52bdcd3ab892","Type":"ContainerStarted","Data":"a692edd8e6c8d2af280c3452822c62686887220001fac9f95918c7d0ba5adeee"}
Jan 27 16:59:52 crc kubenswrapper[5049]: I0127 16:59:52.964398 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lv4sx" event={"ID":"d48a67e1-cecf-41d6-a42c-52bdcd3ab892","Type":"ContainerStarted","Data":"1dba1abb5cc0673f4e7a23dd9b1e04b17575c2e402cf78af87f98deb9de1bf7e"}
Jan 27 16:59:52 crc kubenswrapper[5049]: I0127 16:59:52.969465 5049 generic.go:334] "Generic (PLEG): container finished" podID="c6158c04-2faf-466c-8662-b59a5177b778" containerID="a6ff0606b8e889d35cbec089186fbf95c7a4cad25f5ce0027ab2b95894793a69" exitCode=0
Jan 27 16:59:52 crc kubenswrapper[5049]: I0127 16:59:52.969508 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c6158c04-2faf-466c-8662-b59a5177b778","Type":"ContainerDied","Data":"a6ff0606b8e889d35cbec089186fbf95c7a4cad25f5ce0027ab2b95894793a69"}
Jan 27 16:59:53 crc kubenswrapper[5049]: I0127 16:59:53.008970 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lv4sx" podStartSLOduration=146.008952076 podStartE2EDuration="2m26.008952076s" podCreationTimestamp="2026-01-27 16:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:59:52.987829984 +0000 UTC m=+168.086803533" watchObservedRunningTime="2026-01-27 16:59:53.008952076 +0000 UTC m=+168.107925625"
Jan 27 16:59:53 crc kubenswrapper[5049]: I0127 16:59:53.228412 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pjs7m"
Jan 27 16:59:57 crc kubenswrapper[5049]: I0127 16:59:57.747232 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-msgbv"
Jan 27 16:59:57 crc kubenswrapper[5049]: I0127 16:59:57.754443 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-qnqlr"
Jan 27 16:59:57 crc kubenswrapper[5049]: I0127 16:59:57.760296 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-qnqlr"
Jan 27 16:59:59 crc kubenswrapper[5049]: I0127 16:59:59.149717 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c5pfl"]
Jan 27 16:59:59 crc kubenswrapper[5049]: I0127 16:59:59.150097 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" podUID="9987ba69-abc5-4d37-84aa-a708e31c1586" containerName="controller-manager" containerID="cri-o://571e378e31fbfc15ab0823f4f9bfb1ca2211a979612090f3a659066581051520" gracePeriod=30
Jan 27 16:59:59 crc kubenswrapper[5049]: I0127 16:59:59.164226 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5"]
Jan 27 16:59:59 crc kubenswrapper[5049]: I0127 16:59:59.164534 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" podUID="e9b333e1-888e-4515-b954-c8cbfd4af83a" containerName="route-controller-manager" containerID="cri-o://1b27b0755746526a903bc9f4e883e62d17ecd19fef04878e0c576252a556c590" gracePeriod=30
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.052872 5049 generic.go:334] "Generic (PLEG): container finished" podID="9987ba69-abc5-4d37-84aa-a708e31c1586" containerID="571e378e31fbfc15ab0823f4f9bfb1ca2211a979612090f3a659066581051520" exitCode=0
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.052974 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" event={"ID":"9987ba69-abc5-4d37-84aa-a708e31c1586","Type":"ContainerDied","Data":"571e378e31fbfc15ab0823f4f9bfb1ca2211a979612090f3a659066581051520"}
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.057083 5049 generic.go:334] "Generic (PLEG): container finished" podID="e9b333e1-888e-4515-b954-c8cbfd4af83a" containerID="1b27b0755746526a903bc9f4e883e62d17ecd19fef04878e0c576252a556c590" exitCode=0
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.057122 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" event={"ID":"e9b333e1-888e-4515-b954-c8cbfd4af83a","Type":"ContainerDied","Data":"1b27b0755746526a903bc9f4e883e62d17ecd19fef04878e0c576252a556c590"}
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.146702 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"]
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.148221 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.150425 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.151645 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"]
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.152177 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.188778 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xm9l\" (UniqueName: \"kubernetes.io/projected/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-kube-api-access-5xm9l\") pod \"collect-profiles-29492220-d55cm\" (UID: \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.188869 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-secret-volume\") pod \"collect-profiles-29492220-d55cm\" (UID: \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.188928 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-config-volume\") pod \"collect-profiles-29492220-d55cm\" (UID: \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.289884 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-secret-volume\") pod \"collect-profiles-29492220-d55cm\" (UID: \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.290019 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-config-volume\") pod \"collect-profiles-29492220-d55cm\" (UID: \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.290095 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xm9l\" (UniqueName: \"kubernetes.io/projected/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-kube-api-access-5xm9l\") pod \"collect-profiles-29492220-d55cm\" (UID: \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.291204 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-config-volume\") pod \"collect-profiles-29492220-d55cm\" (UID: \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.315071 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-secret-volume\") pod \"collect-profiles-29492220-d55cm\" (UID: \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.323701 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xm9l\" (UniqueName: \"kubernetes.io/projected/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-kube-api-access-5xm9l\") pod \"collect-profiles-29492220-d55cm\" (UID: \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"
Jan 27 17:00:00 crc kubenswrapper[5049]: I0127 17:00:00.474828 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"
Jan 27 17:00:01 crc kubenswrapper[5049]: I0127 17:00:01.898830 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 17:00:01 crc kubenswrapper[5049]: I0127 17:00:01.918560 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6158c04-2faf-466c-8662-b59a5177b778-kubelet-dir\") pod \"c6158c04-2faf-466c-8662-b59a5177b778\" (UID: \"c6158c04-2faf-466c-8662-b59a5177b778\") "
Jan 27 17:00:01 crc kubenswrapper[5049]: I0127 17:00:01.918647 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6158c04-2faf-466c-8662-b59a5177b778-kube-api-access\") pod \"c6158c04-2faf-466c-8662-b59a5177b778\" (UID: \"c6158c04-2faf-466c-8662-b59a5177b778\") "
Jan 27 17:00:01 crc kubenswrapper[5049]: I0127 17:00:01.918647 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6158c04-2faf-466c-8662-b59a5177b778-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c6158c04-2faf-466c-8662-b59a5177b778" (UID: "c6158c04-2faf-466c-8662-b59a5177b778"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 17:00:01 crc kubenswrapper[5049]: I0127 17:00:01.918820 5049 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6158c04-2faf-466c-8662-b59a5177b778-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:01 crc kubenswrapper[5049]: I0127 17:00:01.924157 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6158c04-2faf-466c-8662-b59a5177b778-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c6158c04-2faf-466c-8662-b59a5177b778" (UID: "c6158c04-2faf-466c-8662-b59a5177b778"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:00:02 crc kubenswrapper[5049]: I0127 17:00:02.032739 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6158c04-2faf-466c-8662-b59a5177b778-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:02 crc kubenswrapper[5049]: I0127 17:00:02.081810 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c6158c04-2faf-466c-8662-b59a5177b778","Type":"ContainerDied","Data":"e8db89c238fafe244e3a1cf32a9a00aa77a21750a16b85d148f4fefbf1776f63"}
Jan 27 17:00:02 crc kubenswrapper[5049]: I0127 17:00:02.081855 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8db89c238fafe244e3a1cf32a9a00aa77a21750a16b85d148f4fefbf1776f63"
Jan 27 17:00:02 crc kubenswrapper[5049]: I0127 17:00:02.081887 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 27 17:00:04 crc kubenswrapper[5049]: I0127 17:00:04.090100 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x"
Jan 27 17:00:06 crc kubenswrapper[5049]: I0127 17:00:06.718105 5049 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-z8wm5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:00:06 crc kubenswrapper[5049]: I0127 17:00:06.718799 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" podUID="e9b333e1-888e-4515-b954-c8cbfd4af83a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:00:09 crc kubenswrapper[5049]: I0127 17:00:09.084155 5049 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-c5pfl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 27 17:00:09 crc kubenswrapper[5049]: I0127 17:00:09.084718 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" podUID="9987ba69-abc5-4d37-84aa-a708e31c1586" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.033191 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.048151 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.070587 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk"]
Jan 27 17:00:11 crc kubenswrapper[5049]: E0127 17:00:11.071722 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9987ba69-abc5-4d37-84aa-a708e31c1586" containerName="controller-manager"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.071751 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9987ba69-abc5-4d37-84aa-a708e31c1586" containerName="controller-manager"
Jan 27 17:00:11 crc kubenswrapper[5049]: E0127 17:00:11.071772 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6158c04-2faf-466c-8662-b59a5177b778" containerName="pruner"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.071782 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6158c04-2faf-466c-8662-b59a5177b778" containerName="pruner"
Jan 27 17:00:11 crc kubenswrapper[5049]: E0127 17:00:11.071801 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b333e1-888e-4515-b954-c8cbfd4af83a" containerName="route-controller-manager"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.071811 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b333e1-888e-4515-b954-c8cbfd4af83a" containerName="route-controller-manager"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.071979 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b333e1-888e-4515-b954-c8cbfd4af83a" containerName="route-controller-manager"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.072000 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6158c04-2faf-466c-8662-b59a5177b778" containerName="pruner"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.072017 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="9987ba69-abc5-4d37-84aa-a708e31c1586" containerName="controller-manager"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.072578 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.121491 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk"]
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.141861 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.141880 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5" event={"ID":"e9b333e1-888e-4515-b954-c8cbfd4af83a","Type":"ContainerDied","Data":"8dbd6ae9438580a32b8385d6948bbbdc1b418899c77df56e0e054241b2b4e262"}
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.141955 5049 scope.go:117] "RemoveContainer" containerID="1b27b0755746526a903bc9f4e883e62d17ecd19fef04878e0c576252a556c590"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.147371 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl" event={"ID":"9987ba69-abc5-4d37-84aa-a708e31c1586","Type":"ContainerDied","Data":"f5160eed707cca0124923c8bda21e08140828f7614efec6abb76cdcba64b5229"}
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.147436 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c5pfl"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.176643 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsd6t\" (UniqueName: \"kubernetes.io/projected/9987ba69-abc5-4d37-84aa-a708e31c1586-kube-api-access-bsd6t\") pod \"9987ba69-abc5-4d37-84aa-a708e31c1586\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") "
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.176805 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-client-ca\") pod \"9987ba69-abc5-4d37-84aa-a708e31c1586\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") "
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.176846 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6sjc\" (UniqueName: \"kubernetes.io/projected/e9b333e1-888e-4515-b954-c8cbfd4af83a-kube-api-access-z6sjc\") pod \"e9b333e1-888e-4515-b954-c8cbfd4af83a\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") "
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.176879 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9b333e1-888e-4515-b954-c8cbfd4af83a-client-ca\") pod \"e9b333e1-888e-4515-b954-c8cbfd4af83a\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") "
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.176972 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9b333e1-888e-4515-b954-c8cbfd4af83a-serving-cert\") pod \"e9b333e1-888e-4515-b954-c8cbfd4af83a\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") "
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.177047 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-config\") pod \"9987ba69-abc5-4d37-84aa-a708e31c1586\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") "
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.177086 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b333e1-888e-4515-b954-c8cbfd4af83a-config\") pod \"e9b333e1-888e-4515-b954-c8cbfd4af83a\" (UID: \"e9b333e1-888e-4515-b954-c8cbfd4af83a\") "
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.177135 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-proxy-ca-bundles\") pod \"9987ba69-abc5-4d37-84aa-a708e31c1586\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") "
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.177198 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9987ba69-abc5-4d37-84aa-a708e31c1586-serving-cert\") pod \"9987ba69-abc5-4d37-84aa-a708e31c1586\" (UID: \"9987ba69-abc5-4d37-84aa-a708e31c1586\") "
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.177434 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d98f76d-477e-4543-aae6-5015d3084a26-serving-cert\") pod \"route-controller-manager-58bf9f9fdf-89fmk\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.177778 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d98f76d-477e-4543-aae6-5015d3084a26-config\") pod \"route-controller-manager-58bf9f9fdf-89fmk\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.177878 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d98f76d-477e-4543-aae6-5015d3084a26-client-ca\") pod \"route-controller-manager-58bf9f9fdf-89fmk\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.177917 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmcpr\" (UniqueName: \"kubernetes.io/projected/4d98f76d-477e-4543-aae6-5015d3084a26-kube-api-access-jmcpr\") pod \"route-controller-manager-58bf9f9fdf-89fmk\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk"
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.178940 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9b333e1-888e-4515-b954-c8cbfd4af83a-client-ca" (OuterVolumeSpecName: "client-ca") pod "e9b333e1-888e-4515-b954-c8cbfd4af83a" (UID: "e9b333e1-888e-4515-b954-c8cbfd4af83a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.179544 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9b333e1-888e-4515-b954-c8cbfd4af83a-config" (OuterVolumeSpecName: "config") pod "e9b333e1-888e-4515-b954-c8cbfd4af83a" (UID: "e9b333e1-888e-4515-b954-c8cbfd4af83a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.179598 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-client-ca" (OuterVolumeSpecName: "client-ca") pod "9987ba69-abc5-4d37-84aa-a708e31c1586" (UID: "9987ba69-abc5-4d37-84aa-a708e31c1586"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.179611 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9987ba69-abc5-4d37-84aa-a708e31c1586" (UID: "9987ba69-abc5-4d37-84aa-a708e31c1586"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.180182 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-config" (OuterVolumeSpecName: "config") pod "9987ba69-abc5-4d37-84aa-a708e31c1586" (UID: "9987ba69-abc5-4d37-84aa-a708e31c1586"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.184965 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9987ba69-abc5-4d37-84aa-a708e31c1586-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9987ba69-abc5-4d37-84aa-a708e31c1586" (UID: "9987ba69-abc5-4d37-84aa-a708e31c1586"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.185149 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b333e1-888e-4515-b954-c8cbfd4af83a-kube-api-access-z6sjc" (OuterVolumeSpecName: "kube-api-access-z6sjc") pod "e9b333e1-888e-4515-b954-c8cbfd4af83a" (UID: "e9b333e1-888e-4515-b954-c8cbfd4af83a"). InnerVolumeSpecName "kube-api-access-z6sjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.185334 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9987ba69-abc5-4d37-84aa-a708e31c1586-kube-api-access-bsd6t" (OuterVolumeSpecName: "kube-api-access-bsd6t") pod "9987ba69-abc5-4d37-84aa-a708e31c1586" (UID: "9987ba69-abc5-4d37-84aa-a708e31c1586"). InnerVolumeSpecName "kube-api-access-bsd6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.187051 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9b333e1-888e-4515-b954-c8cbfd4af83a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e9b333e1-888e-4515-b954-c8cbfd4af83a" (UID: "e9b333e1-888e-4515-b954-c8cbfd4af83a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.280623 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d98f76d-477e-4543-aae6-5015d3084a26-client-ca\") pod \"route-controller-manager-58bf9f9fdf-89fmk\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.280763 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmcpr\" (UniqueName: \"kubernetes.io/projected/4d98f76d-477e-4543-aae6-5015d3084a26-kube-api-access-jmcpr\") pod \"route-controller-manager-58bf9f9fdf-89fmk\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.280879 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d98f76d-477e-4543-aae6-5015d3084a26-serving-cert\") pod \"route-controller-manager-58bf9f9fdf-89fmk\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.281030 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d98f76d-477e-4543-aae6-5015d3084a26-config\") pod \"route-controller-manager-58bf9f9fdf-89fmk\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.281142 5049 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.281169 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9987ba69-abc5-4d37-84aa-a708e31c1586-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.281189 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsd6t\" (UniqueName: \"kubernetes.io/projected/9987ba69-abc5-4d37-84aa-a708e31c1586-kube-api-access-bsd6t\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.281216 5049 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.281235 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6sjc\" (UniqueName: \"kubernetes.io/projected/e9b333e1-888e-4515-b954-c8cbfd4af83a-kube-api-access-z6sjc\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.281258 5049 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e9b333e1-888e-4515-b954-c8cbfd4af83a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.281279 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/e9b333e1-888e-4515-b954-c8cbfd4af83a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.281302 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9987ba69-abc5-4d37-84aa-a708e31c1586-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.281321 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9b333e1-888e-4515-b954-c8cbfd4af83a-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.282454 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d98f76d-477e-4543-aae6-5015d3084a26-client-ca\") pod \"route-controller-manager-58bf9f9fdf-89fmk\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.283444 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d98f76d-477e-4543-aae6-5015d3084a26-config\") pod \"route-controller-manager-58bf9f9fdf-89fmk\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.307522 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d98f76d-477e-4543-aae6-5015d3084a26-serving-cert\") pod \"route-controller-manager-58bf9f9fdf-89fmk\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.311874 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmcpr\" (UniqueName: \"kubernetes.io/projected/4d98f76d-477e-4543-aae6-5015d3084a26-kube-api-access-jmcpr\") pod \"route-controller-manager-58bf9f9fdf-89fmk\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.432241 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.493309 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5"] Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.507361 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-z8wm5"] Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.514095 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c5pfl"] Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.517383 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c5pfl"] Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.659045 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9987ba69-abc5-4d37-84aa-a708e31c1586" path="/var/lib/kubelet/pods/9987ba69-abc5-4d37-84aa-a708e31c1586/volumes" Jan 27 17:00:11 crc kubenswrapper[5049]: I0127 17:00:11.660806 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9b333e1-888e-4515-b954-c8cbfd4af83a" path="/var/lib/kubelet/pods/e9b333e1-888e-4515-b954-c8cbfd4af83a/volumes" Jan 27 17:00:13 crc kubenswrapper[5049]: I0127 17:00:13.876646 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 17:00:15 crc kubenswrapper[5049]: I0127 17:00:15.991307 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-959fcdd48-8j8lc"] Jan 27 17:00:15 crc kubenswrapper[5049]: I0127 17:00:15.992709 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:15 crc kubenswrapper[5049]: I0127 17:00:15.994986 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 17:00:15 crc kubenswrapper[5049]: I0127 17:00:15.995254 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 17:00:15 crc kubenswrapper[5049]: I0127 17:00:15.995310 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 17:00:15 crc kubenswrapper[5049]: I0127 17:00:15.995706 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 17:00:15 crc kubenswrapper[5049]: I0127 17:00:15.996010 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 17:00:15 crc kubenswrapper[5049]: I0127 17:00:15.996034 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.009008 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-959fcdd48-8j8lc"] Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.012112 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.183050 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86e92e96-4548-40bd-802a-90dc0e9c66b3-serving-cert\") pod \"controller-manager-959fcdd48-8j8lc\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.183425 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-proxy-ca-bundles\") pod \"controller-manager-959fcdd48-8j8lc\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.183552 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knvzt\" (UniqueName: \"kubernetes.io/projected/86e92e96-4548-40bd-802a-90dc0e9c66b3-kube-api-access-knvzt\") pod \"controller-manager-959fcdd48-8j8lc\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.183647 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-config\") pod \"controller-manager-959fcdd48-8j8lc\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.183746 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-client-ca\") pod \"controller-manager-959fcdd48-8j8lc\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.285574 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86e92e96-4548-40bd-802a-90dc0e9c66b3-serving-cert\") pod \"controller-manager-959fcdd48-8j8lc\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.285818 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-proxy-ca-bundles\") pod \"controller-manager-959fcdd48-8j8lc\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.285923 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knvzt\" (UniqueName: \"kubernetes.io/projected/86e92e96-4548-40bd-802a-90dc0e9c66b3-kube-api-access-knvzt\") pod \"controller-manager-959fcdd48-8j8lc\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.285992 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-config\") pod \"controller-manager-959fcdd48-8j8lc\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.286048 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-client-ca\") pod \"controller-manager-959fcdd48-8j8lc\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.288451 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-proxy-ca-bundles\") pod \"controller-manager-959fcdd48-8j8lc\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.288788 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-config\") pod \"controller-manager-959fcdd48-8j8lc\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.289320 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-client-ca\") pod \"controller-manager-959fcdd48-8j8lc\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:16 
Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.319996 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knvzt\" (UniqueName: \"kubernetes.io/projected/86e92e96-4548-40bd-802a-90dc0e9c66b3-kube-api-access-knvzt\") pod \"controller-manager-959fcdd48-8j8lc\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc"
Jan 27 17:00:16 crc kubenswrapper[5049]: E0127 17:00:16.412947 5049 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 27 17:00:16 crc kubenswrapper[5049]: E0127 17:00:16.413532 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g468n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8rcrq_openshift-marketplace(c0d9fed4-edc5-4f5e-9962-91a6382fb569): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 17:00:16 crc kubenswrapper[5049]: E0127 17:00:16.414816 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8rcrq" podUID="c0d9fed4-edc5-4f5e-9962-91a6382fb569"
Jan 27 17:00:16 crc kubenswrapper[5049]: I0127 17:00:16.498332 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc"
Jan 27 17:00:17 crc kubenswrapper[5049]: I0127 17:00:17.781967 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:00:17 crc kubenswrapper[5049]: I0127 17:00:17.782045 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:00:18 crc kubenswrapper[5049]: I0127 17:00:18.186170 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6ckbp"
Jan 27 17:00:19 crc kubenswrapper[5049]: I0127 17:00:19.126926 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-959fcdd48-8j8lc"]
Jan 27 17:00:19 crc kubenswrapper[5049]: I0127 17:00:19.225411 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk"]
Jan 27 17:00:19 crc kubenswrapper[5049]: E0127 17:00:19.497315 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8rcrq" podUID="c0d9fed4-edc5-4f5e-9962-91a6382fb569"
Jan 27 17:00:21 crc kubenswrapper[5049]: E0127 17:00:21.836541 5049 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 27 17:00:21 crc kubenswrapper[5049]: E0127 17:00:21.837438 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbdsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xdjtk_openshift-marketplace(c45b66c7-0a92-456f-927a-fe596ffdedb3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 17:00:21 crc kubenswrapper[5049]: E0127 17:00:21.838636 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xdjtk" podUID="c45b66c7-0a92-456f-927a-fe596ffdedb3"
Jan 27 17:00:23 crc kubenswrapper[5049]: E0127 17:00:23.372260 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xdjtk" podUID="c45b66c7-0a92-456f-927a-fe596ffdedb3"
Jan 27 17:00:23 crc kubenswrapper[5049]: I0127 17:00:23.417283 5049 scope.go:117] "RemoveContainer" containerID="571e378e31fbfc15ab0823f4f9bfb1ca2211a979612090f3a659066581051520"
Jan 27 17:00:23 crc kubenswrapper[5049]: E0127 17:00:23.471179 5049 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 27 17:00:23 crc kubenswrapper[5049]: E0127 17:00:23.471317 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rpzw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bwn2r_openshift-marketplace(6841cc70-80cd-499f-a8e6-e2a9031dcbf0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 17:00:23 crc kubenswrapper[5049]: E0127 17:00:23.473024 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bwn2r" podUID="6841cc70-80cd-499f-a8e6-e2a9031dcbf0"
Jan 27 17:00:23 crc kubenswrapper[5049]: E0127 17:00:23.556968 5049 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 27 17:00:23 crc kubenswrapper[5049]: E0127 17:00:23.557572 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-km6xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5w8k7_openshift-marketplace(11537c3a-0298-48bc-a5f8-b79fe47c9cd5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 17:00:23 crc kubenswrapper[5049]: E0127 17:00:23.570716 5049 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 27 17:00:23 crc kubenswrapper[5049]: E0127 17:00:23.570979 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4t9m2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jkf2s_openshift-marketplace(7bada626-6ad8-4fad-8649-0b9f3497e68e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 17:00:23 crc kubenswrapper[5049]: E0127 17:00:23.571076 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5w8k7" podUID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5"
Jan 27 17:00:23 crc kubenswrapper[5049]: E0127 17:00:23.572462 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jkf2s" podUID="7bada626-6ad8-4fad-8649-0b9f3497e68e"
Jan 27 17:00:23 crc kubenswrapper[5049]: E0127 17:00:23.579469 5049 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 27 17:00:23 crc kubenswrapper[5049]: E0127 17:00:23.579627 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pn5gg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6g667_openshift-marketplace(50c0ecb4-7212-4c52-ba39-4fb298404899): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 27 17:00:23 crc kubenswrapper[5049]: E0127 17:00:23.580836 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6g667" podUID="50c0ecb4-7212-4c52-ba39-4fb298404899"
config: context canceled\"" pod="openshift-marketplace/certified-operators-6g667" podUID="50c0ecb4-7212-4c52-ba39-4fb298404899" Jan 27 17:00:23 crc kubenswrapper[5049]: I0127 17:00:23.828173 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"] Jan 27 17:00:23 crc kubenswrapper[5049]: W0127 17:00:23.831796 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb72ffc1_c49f_4ad0_bafa_5d6a4b0d86fb.slice/crio-723c2710780e24abc36979c4d6c9e5e9b2f8204677af6cd8489fba544eb91584 WatchSource:0}: Error finding container 723c2710780e24abc36979c4d6c9e5e9b2f8204677af6cd8489fba544eb91584: Status 404 returned error can't find the container with id 723c2710780e24abc36979c4d6c9e5e9b2f8204677af6cd8489fba544eb91584 Jan 27 17:00:23 crc kubenswrapper[5049]: I0127 17:00:23.896123 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-959fcdd48-8j8lc"] Jan 27 17:00:23 crc kubenswrapper[5049]: W0127 17:00:23.910609 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86e92e96_4548_40bd_802a_90dc0e9c66b3.slice/crio-7078ddc92103c11ead04669e19607242af1c2108ee2a1edce54ba6e940a574ec WatchSource:0}: Error finding container 7078ddc92103c11ead04669e19607242af1c2108ee2a1edce54ba6e940a574ec: Status 404 returned error can't find the container with id 7078ddc92103c11ead04669e19607242af1c2108ee2a1edce54ba6e940a574ec Jan 27 17:00:23 crc kubenswrapper[5049]: I0127 17:00:23.915666 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk"] Jan 27 17:00:23 crc kubenswrapper[5049]: W0127 17:00:23.923081 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d98f76d_477e_4543_aae6_5015d3084a26.slice/crio-6611eb5a7074b30b3ee41e0feeea8ddd80369dfc0d37d61f3b33262e776c3c35 WatchSource:0}: Error finding container 6611eb5a7074b30b3ee41e0feeea8ddd80369dfc0d37d61f3b33262e776c3c35: Status 404 returned error can't find the container with id 6611eb5a7074b30b3ee41e0feeea8ddd80369dfc0d37d61f3b33262e776c3c35 Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.242472 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqrvs" event={"ID":"bff41f89-4b87-4d1a-bf71-39ec568e3a0a","Type":"ContainerStarted","Data":"8236f24b6e9221e3241c15a24f0ea7c54c371fac11d50415c485d091bac4922f"} Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.245772 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" event={"ID":"86e92e96-4548-40bd-802a-90dc0e9c66b3","Type":"ContainerStarted","Data":"a4f78ea4284a4e0b575bdb6b5785462259c1c1a377c9704198e6553c373ffd81"} Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.245831 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" event={"ID":"86e92e96-4548-40bd-802a-90dc0e9c66b3","Type":"ContainerStarted","Data":"7078ddc92103c11ead04669e19607242af1c2108ee2a1edce54ba6e940a574ec"} Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.245874 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" 
podUID="86e92e96-4548-40bd-802a-90dc0e9c66b3" containerName="controller-manager" containerID="cri-o://a4f78ea4284a4e0b575bdb6b5785462259c1c1a377c9704198e6553c373ffd81" gracePeriod=30 Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.245962 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.251504 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58k4t" event={"ID":"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1","Type":"ContainerStarted","Data":"26bc991d2fa686238778ec1da54cd52181cd5fd695198c5d045fa3d2771b55d6"} Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.253490 5049 generic.go:334] "Generic (PLEG): container finished" podID="eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb" containerID="083764d7f785cc193b914b87b0ca158cb847a0424b9e9ebaf94e54ce0439e5ad" exitCode=0 Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.253532 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm" event={"ID":"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb","Type":"ContainerDied","Data":"083764d7f785cc193b914b87b0ca158cb847a0424b9e9ebaf94e54ce0439e5ad"} Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.253622 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm" event={"ID":"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb","Type":"ContainerStarted","Data":"723c2710780e24abc36979c4d6c9e5e9b2f8204677af6cd8489fba544eb91584"} Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.256259 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.256690 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" event={"ID":"4d98f76d-477e-4543-aae6-5015d3084a26","Type":"ContainerStarted","Data":"b37aa8fa6fe772bd9288a90fe11ea4e99c59b050605858adcdbe3e5c8699c1af"} Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.256725 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" event={"ID":"4d98f76d-477e-4543-aae6-5015d3084a26","Type":"ContainerStarted","Data":"6611eb5a7074b30b3ee41e0feeea8ddd80369dfc0d37d61f3b33262e776c3c35"} Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.256842 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" podUID="4d98f76d-477e-4543-aae6-5015d3084a26" containerName="route-controller-manager" containerID="cri-o://b37aa8fa6fe772bd9288a90fe11ea4e99c59b050605858adcdbe3e5c8699c1af" gracePeriod=30 Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.258186 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" Jan 27 17:00:24 crc kubenswrapper[5049]: E0127 17:00:24.258802 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5w8k7" 
podUID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5" Jan 27 17:00:24 crc kubenswrapper[5049]: E0127 17:00:24.258824 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jkf2s" podUID="7bada626-6ad8-4fad-8649-0b9f3497e68e" Jan 27 17:00:24 crc kubenswrapper[5049]: E0127 17:00:24.259580 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6g667" podUID="50c0ecb4-7212-4c52-ba39-4fb298404899" Jan 27 17:00:24 crc kubenswrapper[5049]: E0127 17:00:24.259633 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bwn2r" podUID="6841cc70-80cd-499f-a8e6-e2a9031dcbf0" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.291267 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" podStartSLOduration=25.291245987 podStartE2EDuration="25.291245987s" podCreationTimestamp="2026-01-27 16:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:00:24.287502257 +0000 UTC m=+199.386475806" watchObservedRunningTime="2026-01-27 17:00:24.291245987 +0000 UTC m=+199.390219536" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.348301 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" podStartSLOduration=25.348271195 podStartE2EDuration="25.348271195s" podCreationTimestamp="2026-01-27 16:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:00:24.341432325 +0000 UTC m=+199.440405874" watchObservedRunningTime="2026-01-27 17:00:24.348271195 +0000 UTC m=+199.447244754" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.546573 5049 patch_prober.go:28] interesting pod/route-controller-manager-58bf9f9fdf-89fmk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:50504->10.217.0.55:8443: read: connection reset by peer" start-of-body= Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.547044 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" podUID="4d98f76d-477e-4543-aae6-5015d3084a26" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:50504->10.217.0.55:8443: read: connection reset by peer" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.628142 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.667999 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c64f79955-5h7jf"] Jan 27 17:00:24 crc kubenswrapper[5049]: E0127 17:00:24.668429 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e92e96-4548-40bd-802a-90dc0e9c66b3" containerName="controller-manager" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.668448 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e92e96-4548-40bd-802a-90dc0e9c66b3" containerName="controller-manager" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.668571 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e92e96-4548-40bd-802a-90dc0e9c66b3" containerName="controller-manager" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.669224 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.672300 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c64f79955-5h7jf"] Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.714323 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knvzt\" (UniqueName: \"kubernetes.io/projected/86e92e96-4548-40bd-802a-90dc0e9c66b3-kube-api-access-knvzt\") pod \"86e92e96-4548-40bd-802a-90dc0e9c66b3\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.714432 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-config\") pod \"86e92e96-4548-40bd-802a-90dc0e9c66b3\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.714557 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86e92e96-4548-40bd-802a-90dc0e9c66b3-serving-cert\") pod \"86e92e96-4548-40bd-802a-90dc0e9c66b3\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.714720 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-proxy-ca-bundles\") pod \"86e92e96-4548-40bd-802a-90dc0e9c66b3\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.714748 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-client-ca\") pod \"86e92e96-4548-40bd-802a-90dc0e9c66b3\" (UID: \"86e92e96-4548-40bd-802a-90dc0e9c66b3\") " Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.715148 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-serving-cert\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.715194 5049 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-config\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.715249 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfrcj\" (UniqueName: \"kubernetes.io/projected/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-kube-api-access-mfrcj\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.715293 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-proxy-ca-bundles\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.715328 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-client-ca\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.717735 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "86e92e96-4548-40bd-802a-90dc0e9c66b3" (UID: "86e92e96-4548-40bd-802a-90dc0e9c66b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.717781 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-config" (OuterVolumeSpecName: "config") pod "86e92e96-4548-40bd-802a-90dc0e9c66b3" (UID: "86e92e96-4548-40bd-802a-90dc0e9c66b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.717818 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "86e92e96-4548-40bd-802a-90dc0e9c66b3" (UID: "86e92e96-4548-40bd-802a-90dc0e9c66b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.732351 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e92e96-4548-40bd-802a-90dc0e9c66b3-kube-api-access-knvzt" (OuterVolumeSpecName: "kube-api-access-knvzt") pod "86e92e96-4548-40bd-802a-90dc0e9c66b3" (UID: "86e92e96-4548-40bd-802a-90dc0e9c66b3"). InnerVolumeSpecName "kube-api-access-knvzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.732520 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86e92e96-4548-40bd-802a-90dc0e9c66b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "86e92e96-4548-40bd-802a-90dc0e9c66b3" (UID: "86e92e96-4548-40bd-802a-90dc0e9c66b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.816490 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-config\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.816576 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfrcj\" (UniqueName: \"kubernetes.io/projected/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-kube-api-access-mfrcj\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.816616 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-proxy-ca-bundles\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.816644 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-client-ca\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.816689 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-serving-cert\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.816728 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knvzt\" (UniqueName: \"kubernetes.io/projected/86e92e96-4548-40bd-802a-90dc0e9c66b3-kube-api-access-knvzt\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.816748 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.816759 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86e92e96-4548-40bd-802a-90dc0e9c66b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.816768 5049 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.816776 5049 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/86e92e96-4548-40bd-802a-90dc0e9c66b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.818064 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-client-ca\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.818564 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-proxy-ca-bundles\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.818988 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-config\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.822560 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-serving-cert\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.836039 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfrcj\" (UniqueName: \"kubernetes.io/projected/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-kube-api-access-mfrcj\") pod \"controller-manager-6c64f79955-5h7jf\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:24 crc kubenswrapper[5049]: I0127 17:00:24.984464 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.267267 5049 generic.go:334] "Generic (PLEG): container finished" podID="86e92e96-4548-40bd-802a-90dc0e9c66b3" containerID="a4f78ea4284a4e0b575bdb6b5785462259c1c1a377c9704198e6553c373ffd81" exitCode=0 Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.267799 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" event={"ID":"86e92e96-4548-40bd-802a-90dc0e9c66b3","Type":"ContainerDied","Data":"a4f78ea4284a4e0b575bdb6b5785462259c1c1a377c9704198e6553c373ffd81"} Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.268501 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" event={"ID":"86e92e96-4548-40bd-802a-90dc0e9c66b3","Type":"ContainerDied","Data":"7078ddc92103c11ead04669e19607242af1c2108ee2a1edce54ba6e940a574ec"} Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.268558 5049 scope.go:117] "RemoveContainer" containerID="a4f78ea4284a4e0b575bdb6b5785462259c1c1a377c9704198e6553c373ffd81" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.267874 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-959fcdd48-8j8lc" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.273648 5049 generic.go:334] "Generic (PLEG): container finished" podID="bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" containerID="26bc991d2fa686238778ec1da54cd52181cd5fd695198c5d045fa3d2771b55d6" exitCode=0 Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.273741 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58k4t" event={"ID":"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1","Type":"ContainerDied","Data":"26bc991d2fa686238778ec1da54cd52181cd5fd695198c5d045fa3d2771b55d6"} Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.276440 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-58bf9f9fdf-89fmk_4d98f76d-477e-4543-aae6-5015d3084a26/route-controller-manager/0.log" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.276489 5049 generic.go:334] "Generic (PLEG): container finished" podID="4d98f76d-477e-4543-aae6-5015d3084a26" containerID="b37aa8fa6fe772bd9288a90fe11ea4e99c59b050605858adcdbe3e5c8699c1af" exitCode=255 Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.276550 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" event={"ID":"4d98f76d-477e-4543-aae6-5015d3084a26","Type":"ContainerDied","Data":"b37aa8fa6fe772bd9288a90fe11ea4e99c59b050605858adcdbe3e5c8699c1af"} Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.296616 5049 generic.go:334] "Generic (PLEG): container finished" podID="bff41f89-4b87-4d1a-bf71-39ec568e3a0a" containerID="8236f24b6e9221e3241c15a24f0ea7c54c371fac11d50415c485d091bac4922f" exitCode=0 Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.296814 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqrvs" event={"ID":"bff41f89-4b87-4d1a-bf71-39ec568e3a0a","Type":"ContainerDied","Data":"8236f24b6e9221e3241c15a24f0ea7c54c371fac11d50415c485d091bac4922f"} Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.296969 5049 scope.go:117] "RemoveContainer" 
containerID="a4f78ea4284a4e0b575bdb6b5785462259c1c1a377c9704198e6553c373ffd81" Jan 27 17:00:25 crc kubenswrapper[5049]: E0127 17:00:25.299401 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f78ea4284a4e0b575bdb6b5785462259c1c1a377c9704198e6553c373ffd81\": container with ID starting with a4f78ea4284a4e0b575bdb6b5785462259c1c1a377c9704198e6553c373ffd81 not found: ID does not exist" containerID="a4f78ea4284a4e0b575bdb6b5785462259c1c1a377c9704198e6553c373ffd81" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.299438 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f78ea4284a4e0b575bdb6b5785462259c1c1a377c9704198e6553c373ffd81"} err="failed to get container status \"a4f78ea4284a4e0b575bdb6b5785462259c1c1a377c9704198e6553c373ffd81\": rpc error: code = NotFound desc = could not find container \"a4f78ea4284a4e0b575bdb6b5785462259c1c1a377c9704198e6553c373ffd81\": container with ID starting with a4f78ea4284a4e0b575bdb6b5785462259c1c1a377c9704198e6553c373ffd81 not found: ID does not exist" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.333210 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-959fcdd48-8j8lc"] Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.336559 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-959fcdd48-8j8lc"] Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.351211 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-58bf9f9fdf-89fmk_4d98f76d-477e-4543-aae6-5015d3084a26/route-controller-manager/0.log" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.351579 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.424915 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d98f76d-477e-4543-aae6-5015d3084a26-client-ca\") pod \"4d98f76d-477e-4543-aae6-5015d3084a26\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.424987 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmcpr\" (UniqueName: \"kubernetes.io/projected/4d98f76d-477e-4543-aae6-5015d3084a26-kube-api-access-jmcpr\") pod \"4d98f76d-477e-4543-aae6-5015d3084a26\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.425061 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d98f76d-477e-4543-aae6-5015d3084a26-config\") pod \"4d98f76d-477e-4543-aae6-5015d3084a26\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.425080 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d98f76d-477e-4543-aae6-5015d3084a26-serving-cert\") pod \"4d98f76d-477e-4543-aae6-5015d3084a26\" (UID: \"4d98f76d-477e-4543-aae6-5015d3084a26\") " Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.425772 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d98f76d-477e-4543-aae6-5015d3084a26-client-ca" (OuterVolumeSpecName: "client-ca") pod "4d98f76d-477e-4543-aae6-5015d3084a26" (UID: "4d98f76d-477e-4543-aae6-5015d3084a26"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.426088 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d98f76d-477e-4543-aae6-5015d3084a26-config" (OuterVolumeSpecName: "config") pod "4d98f76d-477e-4543-aae6-5015d3084a26" (UID: "4d98f76d-477e-4543-aae6-5015d3084a26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.431939 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d98f76d-477e-4543-aae6-5015d3084a26-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4d98f76d-477e-4543-aae6-5015d3084a26" (UID: "4d98f76d-477e-4543-aae6-5015d3084a26"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.432749 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d98f76d-477e-4543-aae6-5015d3084a26-kube-api-access-jmcpr" (OuterVolumeSpecName: "kube-api-access-jmcpr") pod "4d98f76d-477e-4543-aae6-5015d3084a26" (UID: "4d98f76d-477e-4543-aae6-5015d3084a26"). InnerVolumeSpecName "kube-api-access-jmcpr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.445210 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c64f79955-5h7jf"] Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.527117 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d98f76d-477e-4543-aae6-5015d3084a26-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.527155 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d98f76d-477e-4543-aae6-5015d3084a26-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.527168 5049 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d98f76d-477e-4543-aae6-5015d3084a26-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.527177 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmcpr\" (UniqueName: \"kubernetes.io/projected/4d98f76d-477e-4543-aae6-5015d3084a26-kube-api-access-jmcpr\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.582062 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.627918 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-config-volume\") pod \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\" (UID: \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\") " Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.628044 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-secret-volume\") pod \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\" (UID: \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\") " Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.628073 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xm9l\" (UniqueName: \"kubernetes.io/projected/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-kube-api-access-5xm9l\") pod \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\" (UID: \"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb\") " Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.629160 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-config-volume" (OuterVolumeSpecName: "config-volume") pod "eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb" (UID: "eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.631082 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-kube-api-access-5xm9l" (OuterVolumeSpecName: "kube-api-access-5xm9l") pod "eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb" (UID: "eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb"). InnerVolumeSpecName "kube-api-access-5xm9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.631227 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb" (UID: "eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.652054 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86e92e96-4548-40bd-802a-90dc0e9c66b3" path="/var/lib/kubelet/pods/86e92e96-4548-40bd-802a-90dc0e9c66b3/volumes" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.729700 5049 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.729734 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xm9l\" (UniqueName: \"kubernetes.io/projected/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-kube-api-access-5xm9l\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:25 crc kubenswrapper[5049]: I0127 17:00:25.729746 5049 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.155339 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 17:00:26 crc kubenswrapper[5049]: E0127 17:00:26.155960 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb" containerName="collect-profiles" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.155978 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb" containerName="collect-profiles" Jan 27 17:00:26 crc kubenswrapper[5049]: E0127 17:00:26.155987 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d98f76d-477e-4543-aae6-5015d3084a26" containerName="route-controller-manager" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.155993 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d98f76d-477e-4543-aae6-5015d3084a26" containerName="route-controller-manager" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.156147 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb" containerName="collect-profiles" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.156163 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d98f76d-477e-4543-aae6-5015d3084a26" containerName="route-controller-manager" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.156541 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.158700 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.163614 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.165022 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.239619 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5df4f7d6-1e02-4229-a7bf-ebacb415e32c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5df4f7d6-1e02-4229-a7bf-ebacb415e32c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.239715 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5df4f7d6-1e02-4229-a7bf-ebacb415e32c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5df4f7d6-1e02-4229-a7bf-ebacb415e32c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.306221 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" event={"ID":"5749b43d-b3a3-41f8-ae0d-f2bbe77299be","Type":"ContainerStarted","Data":"6e16de84135f26b26e810f17e6af49cae1541105f8f170875b60a4cdd08dde83"} Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.306261 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" event={"ID":"5749b43d-b3a3-41f8-ae0d-f2bbe77299be","Type":"ContainerStarted","Data":"00626a3cde5ff39eeb8a2ad6e694d16736f68a3d579f00f84da84aa632129e9c"} Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.306648 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.309818 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58k4t" event={"ID":"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1","Type":"ContainerStarted","Data":"3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde"} Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.311566 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm" event={"ID":"eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb","Type":"ContainerDied","Data":"723c2710780e24abc36979c4d6c9e5e9b2f8204677af6cd8489fba544eb91584"} Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.311607 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="723c2710780e24abc36979c4d6c9e5e9b2f8204677af6cd8489fba544eb91584" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.311649 5049 util.go:48] "No ready sandbox for pod can be found. 
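The "Generic (PLEG)" and "SyncLoop (PLEG)" entries come from the kubelet's pod lifecycle event generator, which periodically relists containers from the runtime and turns state differences into ContainerStarted/ContainerDied events for the sync loop. A toy version of that diff follows; the states are simplified and the IDs are truncated forms of ones appearing in this log.

```go
package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

// relist diffs the runtime's current container states against the previous
// snapshot and emits lifecycle events.
func relist(prev, cur map[string]state) {
	for id, s := range cur {
		switch {
		case prev[id] != running && s == running:
			fmt.Printf("ContainerStarted %s\n", id)
		case prev[id] == running && s != running:
			fmt.Printf("ContainerDied %s\n", id)
		}
	}
}

func main() {
	prev := map[string]state{"a4f78ea4": running}
	cur := map[string]state{"a4f78ea4": exited, "6e16de84": running}
	relist(prev, cur)
}
```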
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.313343 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.313380 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-58bf9f9fdf-89fmk_4d98f76d-477e-4543-aae6-5015d3084a26/route-controller-manager/0.log" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.313525 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.313947 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk" event={"ID":"4d98f76d-477e-4543-aae6-5015d3084a26","Type":"ContainerDied","Data":"6611eb5a7074b30b3ee41e0feeea8ddd80369dfc0d37d61f3b33262e776c3c35"} Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.314009 5049 scope.go:117] "RemoveContainer" containerID="b37aa8fa6fe772bd9288a90fe11ea4e99c59b050605858adcdbe3e5c8699c1af" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.318474 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqrvs" event={"ID":"bff41f89-4b87-4d1a-bf71-39ec568e3a0a","Type":"ContainerStarted","Data":"2ff0a8e67f1c9a6d5c5a8b51415acef4dc35db84660327e676edf71d949a7f87"} Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.324473 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" podStartSLOduration=7.324454561 podStartE2EDuration="7.324454561s" podCreationTimestamp="2026-01-27 17:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:00:26.323778171 +0000 UTC m=+201.422751750" watchObservedRunningTime="2026-01-27 17:00:26.324454561 +0000 UTC m=+201.423428150" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.341783 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5df4f7d6-1e02-4229-a7bf-ebacb415e32c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5df4f7d6-1e02-4229-a7bf-ebacb415e32c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.341890 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5df4f7d6-1e02-4229-a7bf-ebacb415e32c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5df4f7d6-1e02-4229-a7bf-ebacb415e32c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.342087 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5df4f7d6-1e02-4229-a7bf-ebacb415e32c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5df4f7d6-1e02-4229-a7bf-ebacb415e32c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.362557 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-dqrvs" podStartSLOduration=3.298724055 podStartE2EDuration="38.362536895s" podCreationTimestamp="2026-01-27 16:59:48 +0000 UTC" firstStartedPulling="2026-01-27 16:59:50.888043833 +0000 UTC m=+165.987017372" lastFinishedPulling="2026-01-27 17:00:25.951856663 +0000 UTC m=+201.050830212" observedRunningTime="2026-01-27 17:00:26.360051622 +0000 UTC m=+201.459025171" watchObservedRunningTime="2026-01-27 17:00:26.362536895 +0000 UTC m=+201.461510444" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.372175 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5df4f7d6-1e02-4229-a7bf-ebacb415e32c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5df4f7d6-1e02-4229-a7bf-ebacb415e32c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.403606 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk"] Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.406294 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf9f9fdf-89fmk"] Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.424639 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-58k4t" podStartSLOduration=2.479528993 podStartE2EDuration="38.42462011s" podCreationTimestamp="2026-01-27 16:59:48 +0000 UTC" firstStartedPulling="2026-01-27 16:59:49.814453002 +0000 UTC m=+164.913426551" lastFinishedPulling="2026-01-27 17:00:25.759544109 +0000 UTC m=+200.858517668" observedRunningTime="2026-01-27 17:00:26.423637112 +0000 UTC m=+201.522610661" watchObservedRunningTime="2026-01-27 17:00:26.42462011 +0000 UTC m=+201.523593659" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.529167 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.744255 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.997122 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5"] Jan 27 17:00:26 crc kubenswrapper[5049]: I0127 17:00:26.998142 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.000533 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.000844 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.001048 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.002183 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.002986 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.003117 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.014395 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5"] Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.052093 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t65tv\" (UniqueName: \"kubernetes.io/projected/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-kube-api-access-t65tv\") pod \"route-controller-manager-6bd6967b68-f9sb5\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.052146 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-client-ca\") pod \"route-controller-manager-6bd6967b68-f9sb5\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.052186 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-config\") pod \"route-controller-manager-6bd6967b68-f9sb5\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.052374 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-serving-cert\") pod \"route-controller-manager-6bd6967b68-f9sb5\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.154074 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-config\") pod 
\"route-controller-manager-6bd6967b68-f9sb5\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.154321 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-serving-cert\") pod \"route-controller-manager-6bd6967b68-f9sb5\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.155576 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-config\") pod \"route-controller-manager-6bd6967b68-f9sb5\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.155753 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t65tv\" (UniqueName: \"kubernetes.io/projected/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-kube-api-access-t65tv\") pod \"route-controller-manager-6bd6967b68-f9sb5\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.155836 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-client-ca\") pod \"route-controller-manager-6bd6967b68-f9sb5\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.156541 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-client-ca\") pod \"route-controller-manager-6bd6967b68-f9sb5\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.161574 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-serving-cert\") pod \"route-controller-manager-6bd6967b68-f9sb5\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.173121 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t65tv\" (UniqueName: \"kubernetes.io/projected/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-kube-api-access-t65tv\") pod \"route-controller-manager-6bd6967b68-f9sb5\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.317933 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.344600 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5df4f7d6-1e02-4229-a7bf-ebacb415e32c","Type":"ContainerStarted","Data":"b08d9e6d831245fcf187159f240c1f4bb03be744a53094a2e71d80a37bb53c73"} Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.344634 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5df4f7d6-1e02-4229-a7bf-ebacb415e32c","Type":"ContainerStarted","Data":"9595686cc332c567ed0a698a8099f5731863d37279289d6bb8760382627188d5"} Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.594026 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.594006491 podStartE2EDuration="1.594006491s" podCreationTimestamp="2026-01-27 17:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:00:27.365888719 +0000 UTC m=+202.464862278" watchObservedRunningTime="2026-01-27 17:00:27.594006491 +0000 UTC m=+202.692980040" Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.598541 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5"] Jan 27 17:00:27 crc kubenswrapper[5049]: I0127 17:00:27.661564 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d98f76d-477e-4543-aae6-5015d3084a26" path="/var/lib/kubelet/pods/4d98f76d-477e-4543-aae6-5015d3084a26/volumes" Jan 27 17:00:28 crc kubenswrapper[5049]: I0127 17:00:28.351421 5049 generic.go:334] "Generic (PLEG): container finished" podID="5df4f7d6-1e02-4229-a7bf-ebacb415e32c" containerID="b08d9e6d831245fcf187159f240c1f4bb03be744a53094a2e71d80a37bb53c73" exitCode=0 Jan 27 17:00:28 crc kubenswrapper[5049]: I0127 17:00:28.351609 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5df4f7d6-1e02-4229-a7bf-ebacb415e32c","Type":"ContainerDied","Data":"b08d9e6d831245fcf187159f240c1f4bb03be744a53094a2e71d80a37bb53c73"} Jan 27 17:00:28 crc kubenswrapper[5049]: I0127 17:00:28.355130 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" event={"ID":"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7","Type":"ContainerStarted","Data":"fb8dd557726264e1597e1383a7745b99a8afb14ecb3981ae2a10c89314e53848"} Jan 27 17:00:28 crc kubenswrapper[5049]: I0127 17:00:28.355156 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" event={"ID":"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7","Type":"ContainerStarted","Data":"7068b12013dfed6209076dd8b1ed898ca3d96c7ede11484897265690fcbd24b2"} Jan 27 17:00:28 crc kubenswrapper[5049]: I0127 17:00:28.355170 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:28 crc kubenswrapper[5049]: I0127 17:00:28.365705 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:28 crc kubenswrapper[5049]: I0127 
17:00:28.545318 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-58k4t" Jan 27 17:00:28 crc kubenswrapper[5049]: I0127 17:00:28.545376 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-58k4t" Jan 27 17:00:28 crc kubenswrapper[5049]: I0127 17:00:28.952380 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dqrvs" Jan 27 17:00:28 crc kubenswrapper[5049]: I0127 17:00:28.952812 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dqrvs" Jan 27 17:00:29 crc kubenswrapper[5049]: I0127 17:00:29.646872 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 17:00:29 crc kubenswrapper[5049]: I0127 17:00:29.665086 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" podStartSLOduration=10.665052082 podStartE2EDuration="10.665052082s" podCreationTimestamp="2026-01-27 17:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:00:28.399162969 +0000 UTC m=+203.498136518" watchObservedRunningTime="2026-01-27 17:00:29.665052082 +0000 UTC m=+204.764025631" Jan 27 17:00:29 crc kubenswrapper[5049]: I0127 17:00:29.679491 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-58k4t" podUID="bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" containerName="registry-server" probeResult="failure" output=< Jan 27 17:00:29 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 17:00:29 crc kubenswrapper[5049]: > Jan 27 17:00:29 crc kubenswrapper[5049]: I0127 17:00:29.701370 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5df4f7d6-1e02-4229-a7bf-ebacb415e32c-kube-api-access\") pod \"5df4f7d6-1e02-4229-a7bf-ebacb415e32c\" (UID: \"5df4f7d6-1e02-4229-a7bf-ebacb415e32c\") " Jan 27 17:00:29 crc kubenswrapper[5049]: I0127 17:00:29.701426 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5df4f7d6-1e02-4229-a7bf-ebacb415e32c-kubelet-dir\") pod \"5df4f7d6-1e02-4229-a7bf-ebacb415e32c\" (UID: \"5df4f7d6-1e02-4229-a7bf-ebacb415e32c\") " Jan 27 17:00:29 crc kubenswrapper[5049]: I0127 17:00:29.701708 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5df4f7d6-1e02-4229-a7bf-ebacb415e32c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5df4f7d6-1e02-4229-a7bf-ebacb415e32c" (UID: "5df4f7d6-1e02-4229-a7bf-ebacb415e32c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:00:29 crc kubenswrapper[5049]: I0127 17:00:29.709879 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df4f7d6-1e02-4229-a7bf-ebacb415e32c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5df4f7d6-1e02-4229-a7bf-ebacb415e32c" (UID: "5df4f7d6-1e02-4229-a7bf-ebacb415e32c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:00:29 crc kubenswrapper[5049]: I0127 17:00:29.803381 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5df4f7d6-1e02-4229-a7bf-ebacb415e32c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:29 crc kubenswrapper[5049]: I0127 17:00:29.803937 5049 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5df4f7d6-1e02-4229-a7bf-ebacb415e32c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:29 crc kubenswrapper[5049]: I0127 17:00:29.993819 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dqrvs" podUID="bff41f89-4b87-4d1a-bf71-39ec568e3a0a" containerName="registry-server" probeResult="failure" output=< Jan 27 17:00:29 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 17:00:29 crc kubenswrapper[5049]: > Jan 27 17:00:30 crc kubenswrapper[5049]: I0127 17:00:30.366972 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 17:00:30 crc kubenswrapper[5049]: I0127 17:00:30.374872 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5df4f7d6-1e02-4229-a7bf-ebacb415e32c","Type":"ContainerDied","Data":"9595686cc332c567ed0a698a8099f5731863d37279289d6bb8760382627188d5"} Jan 27 17:00:30 crc kubenswrapper[5049]: I0127 17:00:30.374922 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9595686cc332c567ed0a698a8099f5731863d37279289d6bb8760382627188d5" Jan 27 17:00:32 crc kubenswrapper[5049]: I0127 17:00:32.958552 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 17:00:32 crc kubenswrapper[5049]: E0127 17:00:32.959224 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df4f7d6-1e02-4229-a7bf-ebacb415e32c" containerName="pruner" Jan 27 17:00:32 crc kubenswrapper[5049]: I0127 17:00:32.959241 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df4f7d6-1e02-4229-a7bf-ebacb415e32c" containerName="pruner" Jan 27 17:00:32 crc kubenswrapper[5049]: I0127 17:00:32.959384 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df4f7d6-1e02-4229-a7bf-ebacb415e32c" containerName="pruner" Jan 27 17:00:32 crc kubenswrapper[5049]: I0127 17:00:32.959866 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 17:00:32 crc kubenswrapper[5049]: I0127 17:00:32.961887 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 17:00:32 crc kubenswrapper[5049]: I0127 17:00:32.962581 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 17:00:32 crc kubenswrapper[5049]: I0127 17:00:32.976136 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 17:00:33 crc kubenswrapper[5049]: I0127 17:00:33.055686 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/971a61c1-8167-465d-8012-9b19ba71bdce-kube-api-access\") pod \"installer-9-crc\" (UID: \"971a61c1-8167-465d-8012-9b19ba71bdce\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 17:00:33 crc kubenswrapper[5049]: I0127 17:00:33.055758 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/971a61c1-8167-465d-8012-9b19ba71bdce-kubelet-dir\") pod \"installer-9-crc\" (UID: \"971a61c1-8167-465d-8012-9b19ba71bdce\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 17:00:33 crc kubenswrapper[5049]: I0127 17:00:33.055834 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/971a61c1-8167-465d-8012-9b19ba71bdce-var-lock\") pod \"installer-9-crc\" (UID: \"971a61c1-8167-465d-8012-9b19ba71bdce\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 17:00:33 crc kubenswrapper[5049]: I0127 17:00:33.157708 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/971a61c1-8167-465d-8012-9b19ba71bdce-var-lock\") pod \"installer-9-crc\" (UID: \"971a61c1-8167-465d-8012-9b19ba71bdce\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 17:00:33 crc kubenswrapper[5049]: I0127 17:00:33.157786 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/971a61c1-8167-465d-8012-9b19ba71bdce-kube-api-access\") pod \"installer-9-crc\" (UID: \"971a61c1-8167-465d-8012-9b19ba71bdce\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 17:00:33 crc kubenswrapper[5049]: I0127 17:00:33.157819 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/971a61c1-8167-465d-8012-9b19ba71bdce-kubelet-dir\") pod \"installer-9-crc\" (UID: \"971a61c1-8167-465d-8012-9b19ba71bdce\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 17:00:33 crc kubenswrapper[5049]: I0127 17:00:33.157895 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/971a61c1-8167-465d-8012-9b19ba71bdce-kubelet-dir\") pod \"installer-9-crc\" (UID: \"971a61c1-8167-465d-8012-9b19ba71bdce\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 17:00:33 crc kubenswrapper[5049]: I0127 17:00:33.157890 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/971a61c1-8167-465d-8012-9b19ba71bdce-var-lock\") pod \"installer-9-crc\" (UID: 
\"971a61c1-8167-465d-8012-9b19ba71bdce\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 17:00:33 crc kubenswrapper[5049]: I0127 17:00:33.178763 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/971a61c1-8167-465d-8012-9b19ba71bdce-kube-api-access\") pod \"installer-9-crc\" (UID: \"971a61c1-8167-465d-8012-9b19ba71bdce\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 17:00:33 crc kubenswrapper[5049]: I0127 17:00:33.279924 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 17:00:33 crc kubenswrapper[5049]: I0127 17:00:33.721149 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 17:00:34 crc kubenswrapper[5049]: I0127 17:00:34.411323 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"971a61c1-8167-465d-8012-9b19ba71bdce","Type":"ContainerStarted","Data":"80a96c596543d23358dd9cc73227f37e48c462df0500c929a75abbcfc23425bd"} Jan 27 17:00:34 crc kubenswrapper[5049]: I0127 17:00:34.411640 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"971a61c1-8167-465d-8012-9b19ba71bdce","Type":"ContainerStarted","Data":"a8f335b4d2c29dfddc8398103d7568224548c17bf5557f3223f98e8bbdb6305f"} Jan 27 17:00:34 crc kubenswrapper[5049]: I0127 17:00:34.427746 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.427730996 podStartE2EDuration="2.427730996s" podCreationTimestamp="2026-01-27 17:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:00:34.425313955 +0000 UTC m=+209.524287504" watchObservedRunningTime="2026-01-27 17:00:34.427730996 +0000 UTC m=+209.526704545" Jan 27 17:00:38 crc kubenswrapper[5049]: I0127 17:00:38.457060 5049 generic.go:334] "Generic (PLEG): container finished" podID="c0d9fed4-edc5-4f5e-9962-91a6382fb569" containerID="ddc880d27d88be6b801580722e104e70408203b64d023636b9f4ebfc8a697b3e" exitCode=0 Jan 27 17:00:38 crc kubenswrapper[5049]: I0127 17:00:38.457170 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rcrq" event={"ID":"c0d9fed4-edc5-4f5e-9962-91a6382fb569","Type":"ContainerDied","Data":"ddc880d27d88be6b801580722e104e70408203b64d023636b9f4ebfc8a697b3e"} Jan 27 17:00:38 crc kubenswrapper[5049]: I0127 17:00:38.616388 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-58k4t" Jan 27 17:00:38 crc kubenswrapper[5049]: I0127 17:00:38.657053 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-58k4t" Jan 27 17:00:39 crc kubenswrapper[5049]: I0127 17:00:39.002818 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dqrvs" Jan 27 17:00:39 crc kubenswrapper[5049]: I0127 17:00:39.050128 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dqrvs" Jan 27 17:00:39 crc kubenswrapper[5049]: I0127 17:00:39.130695 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c64f79955-5h7jf"] Jan 27 17:00:39 
crc kubenswrapper[5049]: I0127 17:00:39.130995 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" podUID="5749b43d-b3a3-41f8-ae0d-f2bbe77299be" containerName="controller-manager" containerID="cri-o://6e16de84135f26b26e810f17e6af49cae1541105f8f170875b60a4cdd08dde83" gracePeriod=30 Jan 27 17:00:39 crc kubenswrapper[5049]: I0127 17:00:39.135079 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5"] Jan 27 17:00:39 crc kubenswrapper[5049]: I0127 17:00:39.135332 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" podUID="1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7" containerName="route-controller-manager" containerID="cri-o://fb8dd557726264e1597e1383a7745b99a8afb14ecb3981ae2a10c89314e53848" gracePeriod=30 Jan 27 17:00:39 crc kubenswrapper[5049]: I0127 17:00:39.465851 5049 generic.go:334] "Generic (PLEG): container finished" podID="5749b43d-b3a3-41f8-ae0d-f2bbe77299be" containerID="6e16de84135f26b26e810f17e6af49cae1541105f8f170875b60a4cdd08dde83" exitCode=0 Jan 27 17:00:39 crc kubenswrapper[5049]: I0127 17:00:39.465915 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" event={"ID":"5749b43d-b3a3-41f8-ae0d-f2bbe77299be","Type":"ContainerDied","Data":"6e16de84135f26b26e810f17e6af49cae1541105f8f170875b60a4cdd08dde83"} Jan 27 17:00:39 crc kubenswrapper[5049]: I0127 17:00:39.467439 5049 generic.go:334] "Generic (PLEG): container finished" podID="1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7" containerID="fb8dd557726264e1597e1383a7745b99a8afb14ecb3981ae2a10c89314e53848" exitCode=0 Jan 27 17:00:39 crc kubenswrapper[5049]: I0127 17:00:39.468255 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" event={"ID":"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7","Type":"ContainerDied","Data":"fb8dd557726264e1597e1383a7745b99a8afb14ecb3981ae2a10c89314e53848"} Jan 27 17:00:40 crc kubenswrapper[5049]: I0127 17:00:40.881968 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqrvs"] Jan 27 17:00:40 crc kubenswrapper[5049]: I0127 17:00:40.882515 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dqrvs" podUID="bff41f89-4b87-4d1a-bf71-39ec568e3a0a" containerName="registry-server" containerID="cri-o://2ff0a8e67f1c9a6d5c5a8b51415acef4dc35db84660327e676edf71d949a7f87" gracePeriod=2 Jan 27 17:00:42 crc kubenswrapper[5049]: I0127 17:00:42.489840 5049 generic.go:334] "Generic (PLEG): container finished" podID="bff41f89-4b87-4d1a-bf71-39ec568e3a0a" containerID="2ff0a8e67f1c9a6d5c5a8b51415acef4dc35db84660327e676edf71d949a7f87" exitCode=0 Jan 27 17:00:42 crc kubenswrapper[5049]: I0127 17:00:42.489896 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqrvs" event={"ID":"bff41f89-4b87-4d1a-bf71-39ec568e3a0a","Type":"ContainerDied","Data":"2ff0a8e67f1c9a6d5c5a8b51415acef4dc35db84660327e676edf71d949a7f87"} Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.450152 5049 util.go:48] "No ready sandbox for pod can be found. 
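The DELETE handling above uses the standard two-phase kill: the kubelet signals the container and gives it gracePeriod seconds (30 for the controller-manager pods, 2 for the catalog pod) to exit before forcing termination; here all of the containers exit cleanly (exitCode=0) inside their windows. A stdlib-only, Unix-only sketch of the SIGTERM-then-SIGKILL pattern, using an ordinary child process in place of a container:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM) // polite request to shut down
	select {
	case <-done:
		fmt.Println("exited within the grace period")
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace period expired: SIGKILL
		<-done
		fmt.Println("force-killed after grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGrace(cmd, 2*time.Second) // sleep dies on SIGTERM, so the first branch fires
}
```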
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.457206 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.501536 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"] Jan 27 17:00:44 crc kubenswrapper[5049]: E0127 17:00:44.501808 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7" containerName="route-controller-manager" Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.501825 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7" containerName="route-controller-manager" Jan 27 17:00:44 crc kubenswrapper[5049]: E0127 17:00:44.501845 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5749b43d-b3a3-41f8-ae0d-f2bbe77299be" containerName="controller-manager" Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.501852 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5749b43d-b3a3-41f8-ae0d-f2bbe77299be" containerName="controller-manager" Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.501964 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="5749b43d-b3a3-41f8-ae0d-f2bbe77299be" containerName="controller-manager" Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.501991 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7" containerName="route-controller-manager" Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.502415 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k" Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.506828 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"] Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.526210 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-config\") pod \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.526354 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-serving-cert\") pod \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.526481 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t65tv\" (UniqueName: \"kubernetes.io/projected/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-kube-api-access-t65tv\") pod \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.526508 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfrcj\" (UniqueName: \"kubernetes.io/projected/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-kube-api-access-mfrcj\") pod \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.526543 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-serving-cert\") pod \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.526566 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-proxy-ca-bundles\") pod \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.526617 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-client-ca\") pod \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\" (UID: \"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7\") " Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.526694 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-config\") pod \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.526719 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-client-ca\") pod \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\" (UID: \"5749b43d-b3a3-41f8-ae0d-f2bbe77299be\") " Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.526894 5049 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wztd9\" (UniqueName: \"kubernetes.io/projected/97a23007-dfab-4213-be44-cbc0ebd13e3a-kube-api-access-wztd9\") pod \"route-controller-manager-6bd9fc98f5-4wf8k\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.526921 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97a23007-dfab-4213-be44-cbc0ebd13e3a-serving-cert\") pod \"route-controller-manager-6bd9fc98f5-4wf8k\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.526948 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97a23007-dfab-4213-be44-cbc0ebd13e3a-client-ca\") pod \"route-controller-manager-6bd9fc98f5-4wf8k\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.526977 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a23007-dfab-4213-be44-cbc0ebd13e3a-config\") pod \"route-controller-manager-6bd9fc98f5-4wf8k\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.528321 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7" (UID: "1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.528518 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-config" (OuterVolumeSpecName: "config") pod "5749b43d-b3a3-41f8-ae0d-f2bbe77299be" (UID: "5749b43d-b3a3-41f8-ae0d-f2bbe77299be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.528882 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-config" (OuterVolumeSpecName: "config") pod "1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7" (UID: "1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.529096 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-client-ca" (OuterVolumeSpecName: "client-ca") pod "5749b43d-b3a3-41f8-ae0d-f2bbe77299be" (UID: "5749b43d-b3a3-41f8-ae0d-f2bbe77299be"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.529469 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5749b43d-b3a3-41f8-ae0d-f2bbe77299be" (UID: "5749b43d-b3a3-41f8-ae0d-f2bbe77299be"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.534695 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-kube-api-access-mfrcj" (OuterVolumeSpecName: "kube-api-access-mfrcj") pod "5749b43d-b3a3-41f8-ae0d-f2bbe77299be" (UID: "5749b43d-b3a3-41f8-ae0d-f2bbe77299be"). InnerVolumeSpecName "kube-api-access-mfrcj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.537442 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5749b43d-b3a3-41f8-ae0d-f2bbe77299be" (UID: "5749b43d-b3a3-41f8-ae0d-f2bbe77299be"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.542847 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf" event={"ID":"5749b43d-b3a3-41f8-ae0d-f2bbe77299be","Type":"ContainerDied","Data":"00626a3cde5ff39eeb8a2ad6e694d16736f68a3d579f00f84da84aa632129e9c"}
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.542922 5049 scope.go:117] "RemoveContainer" containerID="6e16de84135f26b26e810f17e6af49cae1541105f8f170875b60a4cdd08dde83"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.543090 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c64f79955-5h7jf"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.556130 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7" (UID: "1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.556525 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5" event={"ID":"1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7","Type":"ContainerDied","Data":"7068b12013dfed6209076dd8b1ed898ca3d96c7ede11484897265690fcbd24b2"}
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.556604 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.557787 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-kube-api-access-t65tv" (OuterVolumeSpecName: "kube-api-access-t65tv") pod "1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7" (UID: "1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7"). InnerVolumeSpecName "kube-api-access-t65tv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.593490 5049 scope.go:117] "RemoveContainer" containerID="fb8dd557726264e1597e1383a7745b99a8afb14ecb3981ae2a10c89314e53848"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.609299 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c64f79955-5h7jf"]
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.611718 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c64f79955-5h7jf"]
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.635292 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a23007-dfab-4213-be44-cbc0ebd13e3a-config\") pod \"route-controller-manager-6bd9fc98f5-4wf8k\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.635424 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wztd9\" (UniqueName: \"kubernetes.io/projected/97a23007-dfab-4213-be44-cbc0ebd13e3a-kube-api-access-wztd9\") pod \"route-controller-manager-6bd9fc98f5-4wf8k\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.635457 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97a23007-dfab-4213-be44-cbc0ebd13e3a-serving-cert\") pod \"route-controller-manager-6bd9fc98f5-4wf8k\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.635500 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97a23007-dfab-4213-be44-cbc0ebd13e3a-client-ca\") pod \"route-controller-manager-6bd9fc98f5-4wf8k\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.635568 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-config\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.635581 5049 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.635594 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-config\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.635606 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.635620 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t65tv\" (UniqueName: \"kubernetes.io/projected/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-kube-api-access-t65tv\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.635634 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfrcj\" (UniqueName: \"kubernetes.io/projected/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-kube-api-access-mfrcj\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.635644 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.635656 5049 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5749b43d-b3a3-41f8-ae0d-f2bbe77299be-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.635684 5049 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.636765 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97a23007-dfab-4213-be44-cbc0ebd13e3a-client-ca\") pod \"route-controller-manager-6bd9fc98f5-4wf8k\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.638272 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a23007-dfab-4213-be44-cbc0ebd13e3a-config\") pod \"route-controller-manager-6bd9fc98f5-4wf8k\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.671451 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wztd9\" (UniqueName: \"kubernetes.io/projected/97a23007-dfab-4213-be44-cbc0ebd13e3a-kube-api-access-wztd9\") pod \"route-controller-manager-6bd9fc98f5-4wf8k\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.673529 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97a23007-dfab-4213-be44-cbc0ebd13e3a-serving-cert\") pod \"route-controller-manager-6bd9fc98f5-4wf8k\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") " pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.777165 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqrvs"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.836375 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.849270 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhqvp\" (UniqueName: \"kubernetes.io/projected/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-kube-api-access-xhqvp\") pod \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\" (UID: \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\") "
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.849690 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-utilities\") pod \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\" (UID: \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\") "
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.849850 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-catalog-content\") pod \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\" (UID: \"bff41f89-4b87-4d1a-bf71-39ec568e3a0a\") "
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.851246 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-utilities" (OuterVolumeSpecName: "utilities") pod "bff41f89-4b87-4d1a-bf71-39ec568e3a0a" (UID: "bff41f89-4b87-4d1a-bf71-39ec568e3a0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.865626 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-kube-api-access-xhqvp" (OuterVolumeSpecName: "kube-api-access-xhqvp") pod "bff41f89-4b87-4d1a-bf71-39ec568e3a0a" (UID: "bff41f89-4b87-4d1a-bf71-39ec568e3a0a"). InnerVolumeSpecName "kube-api-access-xhqvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.904761 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5"]
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.907646 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd6967b68-f9sb5"]
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.953657 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:44 crc kubenswrapper[5049]: I0127 17:00:44.954164 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhqvp\" (UniqueName: \"kubernetes.io/projected/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-kube-api-access-xhqvp\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.039816 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bff41f89-4b87-4d1a-bf71-39ec568e3a0a" (UID: "bff41f89-4b87-4d1a-bf71-39ec568e3a0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.056071 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff41f89-4b87-4d1a-bf71-39ec568e3a0a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.145560 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"]
Jan 27 17:00:45 crc kubenswrapper[5049]: W0127 17:00:45.289863 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97a23007_dfab_4213_be44_cbc0ebd13e3a.slice/crio-dc8e2b99b0649c7b2464fa293ffb465e29f3e7d31f7bf67acf73f39f9ea1a57d WatchSource:0}: Error finding container dc8e2b99b0649c7b2464fa293ffb465e29f3e7d31f7bf67acf73f39f9ea1a57d: Status 404 returned error can't find the container with id dc8e2b99b0649c7b2464fa293ffb465e29f3e7d31f7bf67acf73f39f9ea1a57d
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.565337 5049 generic.go:334] "Generic (PLEG): container finished" podID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5" containerID="99a1d99af8b8bfdd96eae97f67ec4ac3e3e865b4a69ad6c0a08931c7fee629f0" exitCode=0
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.565409 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5w8k7" event={"ID":"11537c3a-0298-48bc-a5f8-b79fe47c9cd5","Type":"ContainerDied","Data":"99a1d99af8b8bfdd96eae97f67ec4ac3e3e865b4a69ad6c0a08931c7fee629f0"}
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.567580 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k" event={"ID":"97a23007-dfab-4213-be44-cbc0ebd13e3a","Type":"ContainerStarted","Data":"574af29a78f7cd953270bcb0fc25fd58616e69b388c5691846be6de83d00725c"}
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.567611 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k" event={"ID":"97a23007-dfab-4213-be44-cbc0ebd13e3a","Type":"ContainerStarted","Data":"dc8e2b99b0649c7b2464fa293ffb465e29f3e7d31f7bf67acf73f39f9ea1a57d"}
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.567828 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.571546 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rcrq" event={"ID":"c0d9fed4-edc5-4f5e-9962-91a6382fb569","Type":"ContainerStarted","Data":"e68c8feb5c892651db0c09bc76a74f426bf8005b7c9e60ab9022fc9bfc7509f5"}
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.575165 5049 generic.go:334] "Generic (PLEG): container finished" podID="50c0ecb4-7212-4c52-ba39-4fb298404899" containerID="11af5bfa8e2c64cb051a4c535f107b4eef92d836ca301b3a77a6a50ead83587f" exitCode=0
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.575291 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6g667" event={"ID":"50c0ecb4-7212-4c52-ba39-4fb298404899","Type":"ContainerDied","Data":"11af5bfa8e2c64cb051a4c535f107b4eef92d836ca301b3a77a6a50ead83587f"}
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.589427 5049 generic.go:334] "Generic (PLEG): container finished" podID="7bada626-6ad8-4fad-8649-0b9f3497e68e" containerID="1b9f32da270aa325b4217f2bbc20d6e5c755d8f5f99988225a5dc48d803c9712" exitCode=0
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.589535 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkf2s" event={"ID":"7bada626-6ad8-4fad-8649-0b9f3497e68e","Type":"ContainerDied","Data":"1b9f32da270aa325b4217f2bbc20d6e5c755d8f5f99988225a5dc48d803c9712"}
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.595206 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqrvs" event={"ID":"bff41f89-4b87-4d1a-bf71-39ec568e3a0a","Type":"ContainerDied","Data":"7d78dbae317860cf93806b52f1f1b4e79d2308e5ba0add4f19489db7438baa6e"}
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.595260 5049 scope.go:117] "RemoveContainer" containerID="2ff0a8e67f1c9a6d5c5a8b51415acef4dc35db84660327e676edf71d949a7f87"
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.595557 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqrvs"
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.598645 5049 generic.go:334] "Generic (PLEG): container finished" podID="6841cc70-80cd-499f-a8e6-e2a9031dcbf0" containerID="699f2402c121c55d87cfab5bb9895503a05d438024f60e8cb698bdad19396813" exitCode=0
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.598732 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwn2r" event={"ID":"6841cc70-80cd-499f-a8e6-e2a9031dcbf0","Type":"ContainerDied","Data":"699f2402c121c55d87cfab5bb9895503a05d438024f60e8cb698bdad19396813"}
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.603719 5049 generic.go:334] "Generic (PLEG): container finished" podID="c45b66c7-0a92-456f-927a-fe596ffdedb3" containerID="f4b375fcb0f20e0ce464fe84d1191542c3578de2daf283448e9e4f851ce5ea8e" exitCode=0
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.604202 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjtk" event={"ID":"c45b66c7-0a92-456f-927a-fe596ffdedb3","Type":"ContainerDied","Data":"f4b375fcb0f20e0ce464fe84d1191542c3578de2daf283448e9e4f851ce5ea8e"}
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.619478 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k" podStartSLOduration=6.619452245 podStartE2EDuration="6.619452245s" podCreationTimestamp="2026-01-27 17:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:00:45.614584173 +0000 UTC m=+220.713557732" watchObservedRunningTime="2026-01-27 17:00:45.619452245 +0000 UTC m=+220.718425814"
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.621953 5049 scope.go:117] "RemoveContainer" containerID="8236f24b6e9221e3241c15a24f0ea7c54c371fac11d50415c485d091bac4922f"
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.644409 5049 scope.go:117] "RemoveContainer" containerID="3eda3d6abde26392c8061c9e1d27e32bf31ded14cfcef2aa495f130ec31cfcaa"
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.653787 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8rcrq" podStartSLOduration=3.8621175340000002 podStartE2EDuration="1m0.653739138s" podCreationTimestamp="2026-01-27 16:59:45 +0000 UTC" firstStartedPulling="2026-01-27 16:59:47.666166496 +0000 UTC m=+162.765140045" lastFinishedPulling="2026-01-27 17:00:44.4577881 +0000 UTC m=+219.556761649" observedRunningTime="2026-01-27 17:00:45.64934707 +0000 UTC m=+220.748320629" watchObservedRunningTime="2026-01-27 17:00:45.653739138 +0000 UTC m=+220.752712687"
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.675986 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7" path="/var/lib/kubelet/pods/1a7b4e1e-98e6-456a-87d1-1ec41dbbc3f7/volumes"
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.676745 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5749b43d-b3a3-41f8-ae0d-f2bbe77299be" path="/var/lib/kubelet/pods/5749b43d-b3a3-41f8-ae0d-f2bbe77299be/volumes"
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.736138 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqrvs"]
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.741581 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dqrvs"]
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.743350 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8rcrq"
Jan 27 17:00:45 crc kubenswrapper[5049]: I0127 17:00:45.743411 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8rcrq"
Jan 27 17:00:46 crc kubenswrapper[5049]: I0127 17:00:46.319789 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:46 crc kubenswrapper[5049]: I0127 17:00:46.617805 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5w8k7" event={"ID":"11537c3a-0298-48bc-a5f8-b79fe47c9cd5","Type":"ContainerStarted","Data":"ccdbeac88d9a2ab6256932fc862daae5201373f071788dabf43423e13f0b6e5d"}
Jan 27 17:00:46 crc kubenswrapper[5049]: I0127 17:00:46.622840 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkf2s" event={"ID":"7bada626-6ad8-4fad-8649-0b9f3497e68e","Type":"ContainerStarted","Data":"d918e3a91942f37f25d7cf5e1abb24150d41ab4450040aeeb743376ad2671d46"}
Jan 27 17:00:46 crc kubenswrapper[5049]: I0127 17:00:46.626369 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6g667" event={"ID":"50c0ecb4-7212-4c52-ba39-4fb298404899","Type":"ContainerStarted","Data":"487962a73cc4113fa682ba4bfab96f7f53a42221fbf2632d755efe4667b4ad76"}
Jan 27 17:00:46 crc kubenswrapper[5049]: I0127 17:00:46.628905 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwn2r" event={"ID":"6841cc70-80cd-499f-a8e6-e2a9031dcbf0","Type":"ContainerStarted","Data":"16c92a25fb92ad0237156f4484509c4f6a635732698c598a7cc6609344ee9a9d"}
Jan 27 17:00:46 crc kubenswrapper[5049]: I0127 17:00:46.631829 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjtk" event={"ID":"c45b66c7-0a92-456f-927a-fe596ffdedb3","Type":"ContainerStarted","Data":"b960d99204142b0cf2a2955b0503256d1dee0ac041ceef049f477a091614042a"}
Jan 27 17:00:46 crc kubenswrapper[5049]: I0127 17:00:46.655538 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5w8k7" podStartSLOduration=2.420623386 podStartE2EDuration="59.655511446s" podCreationTimestamp="2026-01-27 16:59:47 +0000 UTC" firstStartedPulling="2026-01-27 16:59:48.782501743 +0000 UTC m=+163.881475292" lastFinishedPulling="2026-01-27 17:00:46.017389803 +0000 UTC m=+221.116363352" observedRunningTime="2026-01-27 17:00:46.636363326 +0000 UTC m=+221.735336875" watchObservedRunningTime="2026-01-27 17:00:46.655511446 +0000 UTC m=+221.754484995"
Jan 27 17:00:46 crc kubenswrapper[5049]: I0127 17:00:46.657535 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bwn2r" podStartSLOduration=2.137617179 podStartE2EDuration="1m1.657529145s" podCreationTimestamp="2026-01-27 16:59:45 +0000 UTC" firstStartedPulling="2026-01-27 16:59:46.589646833 +0000 UTC m=+161.688620382" lastFinishedPulling="2026-01-27 17:00:46.109558799 +0000 UTC m=+221.208532348" observedRunningTime="2026-01-27 17:00:46.654734433 +0000 UTC m=+221.753707992" watchObservedRunningTime="2026-01-27 17:00:46.657529145 +0000 UTC m=+221.756502694"
Jan 27 17:00:46 crc kubenswrapper[5049]: I0127 17:00:46.675066 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jkf2s" podStartSLOduration=3.306809417 podStartE2EDuration="1m1.675037247s" podCreationTimestamp="2026-01-27 16:59:45 +0000 UTC" firstStartedPulling="2026-01-27 16:59:47.699911827 +0000 UTC m=+162.798885376" lastFinishedPulling="2026-01-27 17:00:46.068139657 +0000 UTC m=+221.167113206" observedRunningTime="2026-01-27 17:00:46.672521534 +0000 UTC m=+221.771495093" watchObservedRunningTime="2026-01-27 17:00:46.675037247 +0000 UTC m=+221.774010806"
Jan 27 17:00:46 crc kubenswrapper[5049]: I0127 17:00:46.692921 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xdjtk" podStartSLOduration=2.341017753 podStartE2EDuration="59.692905s" podCreationTimestamp="2026-01-27 16:59:47 +0000 UTC" firstStartedPulling="2026-01-27 16:59:48.776467661 +0000 UTC m=+163.875441210" lastFinishedPulling="2026-01-27 17:00:46.128354908 +0000 UTC m=+221.227328457" observedRunningTime="2026-01-27 17:00:46.68880101 +0000 UTC m=+221.787774559" watchObservedRunningTime="2026-01-27 17:00:46.692905 +0000 UTC m=+221.791878549"
Jan 27 17:00:46 crc kubenswrapper[5049]: I0127 17:00:46.714742 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6g667" podStartSLOduration=3.261423269 podStartE2EDuration="1m2.714716988s" podCreationTimestamp="2026-01-27 16:59:44 +0000 UTC" firstStartedPulling="2026-01-27 16:59:46.584319035 +0000 UTC m=+161.683292584" lastFinishedPulling="2026-01-27 17:00:46.037612754 +0000 UTC m=+221.136586303" observedRunningTime="2026-01-27 17:00:46.710278818 +0000 UTC m=+221.809252377" watchObservedRunningTime="2026-01-27 17:00:46.714716988 +0000 UTC m=+221.813690537"
Jan 27 17:00:46 crc kubenswrapper[5049]: I0127 17:00:46.791806 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tn44m"]
Jan 27 17:00:46 crc kubenswrapper[5049]: I0127 17:00:46.795129 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8rcrq" podUID="c0d9fed4-edc5-4f5e-9962-91a6382fb569" containerName="registry-server" probeResult="failure" output=<
Jan 27 17:00:46 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s
Jan 27 17:00:46 crc kubenswrapper[5049]: >
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.014042 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"]
Jan 27 17:00:47 crc kubenswrapper[5049]: E0127 17:00:47.014332 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff41f89-4b87-4d1a-bf71-39ec568e3a0a" containerName="extract-content"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.014351 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff41f89-4b87-4d1a-bf71-39ec568e3a0a" containerName="extract-content"
Jan 27 17:00:47 crc kubenswrapper[5049]: E0127 17:00:47.014365 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff41f89-4b87-4d1a-bf71-39ec568e3a0a" containerName="extract-utilities"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.014373 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff41f89-4b87-4d1a-bf71-39ec568e3a0a" containerName="extract-utilities"
Jan 27 17:00:47 crc kubenswrapper[5049]: E0127 17:00:47.014396 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff41f89-4b87-4d1a-bf71-39ec568e3a0a" containerName="registry-server"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.014402 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff41f89-4b87-4d1a-bf71-39ec568e3a0a" containerName="registry-server"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.014493 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff41f89-4b87-4d1a-bf71-39ec568e3a0a" containerName="registry-server"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.015095 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.020061 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.020962 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.021433 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.021537 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.021581 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.024787 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.029000 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.032958 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"]
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.096860 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-config\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.096976 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5rwk\" (UniqueName: \"kubernetes.io/projected/f4631f59-6443-41b9-9578-6115002922bf-kube-api-access-g5rwk\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.097055 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4631f59-6443-41b9-9578-6115002922bf-serving-cert\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.097098 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-proxy-ca-bundles\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.097127 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-client-ca\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.198948 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4631f59-6443-41b9-9578-6115002922bf-serving-cert\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.199038 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-proxy-ca-bundles\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.199078 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-client-ca\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.199106 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-config\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.199341 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5rwk\" (UniqueName: \"kubernetes.io/projected/f4631f59-6443-41b9-9578-6115002922bf-kube-api-access-g5rwk\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.200970 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-config\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.201203 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-client-ca\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.202333 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-proxy-ca-bundles\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.221909 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4631f59-6443-41b9-9578-6115002922bf-serving-cert\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.222049 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5rwk\" (UniqueName: \"kubernetes.io/projected/f4631f59-6443-41b9-9578-6115002922bf-kube-api-access-g5rwk\") pod \"controller-manager-5fd94cbdfb-mj7ts\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") " pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.333067 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.504398 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xdjtk"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.504949 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xdjtk"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.622244 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"]
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.640799 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts" event={"ID":"f4631f59-6443-41b9-9578-6115002922bf","Type":"ContainerStarted","Data":"a3bfc45082c6e4c5428be28ff3911cb8d3430c8972c163d94b2338126fe179f2"}
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.657007 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bff41f89-4b87-4d1a-bf71-39ec568e3a0a" path="/var/lib/kubelet/pods/bff41f89-4b87-4d1a-bf71-39ec568e3a0a/volumes"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.781158 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.781251 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.781323 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.782601 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.782704 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701" gracePeriod=600
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.954208 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5w8k7"
Jan 27 17:00:47 crc kubenswrapper[5049]: I0127 17:00:47.954864 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5w8k7"
Jan 27 17:00:48 crc kubenswrapper[5049]: I0127 17:00:48.586544 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-xdjtk" podUID="c45b66c7-0a92-456f-927a-fe596ffdedb3" containerName="registry-server" probeResult="failure" output=<
Jan 27 17:00:48 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s
Jan 27 17:00:48 crc kubenswrapper[5049]: >
Jan 27 17:00:48 crc kubenswrapper[5049]: I0127 17:00:48.647097 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701" exitCode=0
Jan 27 17:00:48 crc kubenswrapper[5049]: I0127 17:00:48.647149 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701"}
Jan 27 17:00:48 crc kubenswrapper[5049]: I0127 17:00:48.647176 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"ea459288717ffc93888a4def2a377fe87f81dd7aad6264194cff040e79562fcf"}
Jan 27 17:00:48 crc kubenswrapper[5049]: I0127 17:00:48.649172 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts" event={"ID":"f4631f59-6443-41b9-9578-6115002922bf","Type":"ContainerStarted","Data":"f0ef53ad47ec0a1ef4ce93c4b007da46316db7d1ecbf13a7b5a9c5834d508e6a"}
Jan 27 17:00:48 crc kubenswrapper[5049]: I0127 17:00:48.649441 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:48 crc kubenswrapper[5049]: I0127 17:00:48.656819 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:48 crc kubenswrapper[5049]: I0127 17:00:48.997402 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-5w8k7" podUID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5" containerName="registry-server" probeResult="failure" output=<
Jan 27 17:00:48 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s
Jan 27 17:00:48 crc kubenswrapper[5049]: >
Jan 27 17:00:55 crc kubenswrapper[5049]: I0127 17:00:55.323841 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6g667"
Jan 27 17:00:55 crc kubenswrapper[5049]: I0127 17:00:55.324815 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6g667"
Jan 27 17:00:55 crc kubenswrapper[5049]: I0127 17:00:55.388291 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6g667"
Jan 27 17:00:55 crc kubenswrapper[5049]: I0127 17:00:55.418440 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts" podStartSLOduration=16.418410961 podStartE2EDuration="16.418410961s" podCreationTimestamp="2026-01-27 17:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:00:48.686182927 +0000 UTC m=+223.785156476" watchObservedRunningTime="2026-01-27 17:00:55.418410961 +0000 UTC m=+230.517384550"
Jan 27 17:00:55 crc kubenswrapper[5049]: I0127 17:00:55.781575 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6g667"
Jan 27 17:00:55 crc kubenswrapper[5049]: I0127 17:00:55.801897 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8rcrq"
Jan 27 17:00:55 crc kubenswrapper[5049]: I0127 17:00:55.802933 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bwn2r"
Jan 27 17:00:55 crc kubenswrapper[5049]: I0127 17:00:55.802997 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bwn2r"
Jan 27 17:00:55 crc kubenswrapper[5049]: I0127 17:00:55.860250 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8rcrq"
Jan 27 17:00:55 crc kubenswrapper[5049]: I0127 17:00:55.863649 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bwn2r"
Jan 27 17:00:55 crc kubenswrapper[5049]: I0127 17:00:55.933107 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jkf2s"
Jan 27 17:00:55 crc kubenswrapper[5049]: I0127 17:00:55.933594 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jkf2s"
Jan 27 17:00:56 crc kubenswrapper[5049]: I0127 17:00:56.000930 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jkf2s"
Jan 27 17:00:56 crc kubenswrapper[5049]: I0127 17:00:56.798514 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bwn2r"
Jan 27 17:00:56 crc kubenswrapper[5049]: I0127 17:00:56.810311 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jkf2s"
Jan 27 17:00:57 crc kubenswrapper[5049]: I0127 17:00:57.237758 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8rcrq"]
Jan 27 17:00:57 crc kubenswrapper[5049]: I0127 17:00:57.556984 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xdjtk"
Jan 27 17:00:57 crc kubenswrapper[5049]: I0127 17:00:57.617526 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xdjtk"
Jan 27 17:00:57 crc kubenswrapper[5049]: I0127 17:00:57.724086 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8rcrq" podUID="c0d9fed4-edc5-4f5e-9962-91a6382fb569" containerName="registry-server" containerID="cri-o://e68c8feb5c892651db0c09bc76a74f426bf8005b7c9e60ab9022fc9bfc7509f5" gracePeriod=2
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.001283 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5w8k7"
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.051748 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5w8k7"
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.233746 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rcrq"
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.240171 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jkf2s"]
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.295386 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g468n\" (UniqueName: \"kubernetes.io/projected/c0d9fed4-edc5-4f5e-9962-91a6382fb569-kube-api-access-g468n\") pod \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\" (UID: \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\") "
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.295446 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d9fed4-edc5-4f5e-9962-91a6382fb569-catalog-content\") pod \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\" (UID: \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\") "
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.295472 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d9fed4-edc5-4f5e-9962-91a6382fb569-utilities\") pod \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\" (UID: \"c0d9fed4-edc5-4f5e-9962-91a6382fb569\") "
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.296569 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0d9fed4-edc5-4f5e-9962-91a6382fb569-utilities" (OuterVolumeSpecName: "utilities") pod "c0d9fed4-edc5-4f5e-9962-91a6382fb569" (UID: "c0d9fed4-edc5-4f5e-9962-91a6382fb569"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.305150 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d9fed4-edc5-4f5e-9962-91a6382fb569-kube-api-access-g468n" (OuterVolumeSpecName: "kube-api-access-g468n") pod "c0d9fed4-edc5-4f5e-9962-91a6382fb569" (UID: "c0d9fed4-edc5-4f5e-9962-91a6382fb569"). InnerVolumeSpecName "kube-api-access-g468n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.349329 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0d9fed4-edc5-4f5e-9962-91a6382fb569-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0d9fed4-edc5-4f5e-9962-91a6382fb569" (UID: "c0d9fed4-edc5-4f5e-9962-91a6382fb569"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.397552 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0d9fed4-edc5-4f5e-9962-91a6382fb569-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.397628 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0d9fed4-edc5-4f5e-9962-91a6382fb569-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.397648 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g468n\" (UniqueName: \"kubernetes.io/projected/c0d9fed4-edc5-4f5e-9962-91a6382fb569-kube-api-access-g468n\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.732176 5049 generic.go:334] "Generic (PLEG): container finished" podID="c0d9fed4-edc5-4f5e-9962-91a6382fb569" containerID="e68c8feb5c892651db0c09bc76a74f426bf8005b7c9e60ab9022fc9bfc7509f5" exitCode=0
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.732237 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rcrq" event={"ID":"c0d9fed4-edc5-4f5e-9962-91a6382fb569","Type":"ContainerDied","Data":"e68c8feb5c892651db0c09bc76a74f426bf8005b7c9e60ab9022fc9bfc7509f5"}
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.732306 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rcrq" event={"ID":"c0d9fed4-edc5-4f5e-9962-91a6382fb569","Type":"ContainerDied","Data":"21a204716a6d5c123d2c9861c508a8e9e3ccb872e4279e2917541bb931cf2d5f"}
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.732328 5049 scope.go:117] "RemoveContainer" containerID="e68c8feb5c892651db0c09bc76a74f426bf8005b7c9e60ab9022fc9bfc7509f5"
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.732321 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rcrq"
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.732608 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jkf2s" podUID="7bada626-6ad8-4fad-8649-0b9f3497e68e" containerName="registry-server" containerID="cri-o://d918e3a91942f37f25d7cf5e1abb24150d41ab4450040aeeb743376ad2671d46" gracePeriod=2
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.752891 5049 scope.go:117] "RemoveContainer" containerID="ddc880d27d88be6b801580722e104e70408203b64d023636b9f4ebfc8a697b3e"
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.776806 5049 scope.go:117] "RemoveContainer" containerID="918888ab9064441dc7545abaeb6479c4bf123a4b91011cebd74490619e72193f"
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.777109 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8rcrq"]
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.779596 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8rcrq"]
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.854338 5049 scope.go:117] "RemoveContainer" containerID="e68c8feb5c892651db0c09bc76a74f426bf8005b7c9e60ab9022fc9bfc7509f5"
Jan 27 17:00:58 crc kubenswrapper[5049]: E0127 17:00:58.854786 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e68c8feb5c892651db0c09bc76a74f426bf8005b7c9e60ab9022fc9bfc7509f5\": container with ID starting with e68c8feb5c892651db0c09bc76a74f426bf8005b7c9e60ab9022fc9bfc7509f5 not found: ID does not exist" containerID="e68c8feb5c892651db0c09bc76a74f426bf8005b7c9e60ab9022fc9bfc7509f5"
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.854828 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e68c8feb5c892651db0c09bc76a74f426bf8005b7c9e60ab9022fc9bfc7509f5"} err="failed to get container status \"e68c8feb5c892651db0c09bc76a74f426bf8005b7c9e60ab9022fc9bfc7509f5\": rpc error: code = NotFound desc = could not find container \"e68c8feb5c892651db0c09bc76a74f426bf8005b7c9e60ab9022fc9bfc7509f5\": container with ID starting with e68c8feb5c892651db0c09bc76a74f426bf8005b7c9e60ab9022fc9bfc7509f5 not found: ID does not exist"
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.854858 5049 scope.go:117] "RemoveContainer" containerID="ddc880d27d88be6b801580722e104e70408203b64d023636b9f4ebfc8a697b3e"
Jan 27 17:00:58 crc kubenswrapper[5049]: E0127 17:00:58.855416 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddc880d27d88be6b801580722e104e70408203b64d023636b9f4ebfc8a697b3e\": container with ID starting with ddc880d27d88be6b801580722e104e70408203b64d023636b9f4ebfc8a697b3e not found: ID does not exist" containerID="ddc880d27d88be6b801580722e104e70408203b64d023636b9f4ebfc8a697b3e"
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.855456 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc880d27d88be6b801580722e104e70408203b64d023636b9f4ebfc8a697b3e"} err="failed to get container status \"ddc880d27d88be6b801580722e104e70408203b64d023636b9f4ebfc8a697b3e\": rpc error: code = NotFound desc = could not find container \"ddc880d27d88be6b801580722e104e70408203b64d023636b9f4ebfc8a697b3e\": container with ID starting with ddc880d27d88be6b801580722e104e70408203b64d023636b9f4ebfc8a697b3e not found: ID does not exist"
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.855505 5049 scope.go:117] "RemoveContainer" containerID="918888ab9064441dc7545abaeb6479c4bf123a4b91011cebd74490619e72193f"
Jan 27 17:00:58 crc kubenswrapper[5049]: E0127 17:00:58.856076 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"918888ab9064441dc7545abaeb6479c4bf123a4b91011cebd74490619e72193f\": container with ID starting with 918888ab9064441dc7545abaeb6479c4bf123a4b91011cebd74490619e72193f not found: ID does not exist" containerID="918888ab9064441dc7545abaeb6479c4bf123a4b91011cebd74490619e72193f"
Jan 27 17:00:58 crc kubenswrapper[5049]: I0127 17:00:58.856101 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"918888ab9064441dc7545abaeb6479c4bf123a4b91011cebd74490619e72193f"} err="failed to get container status \"918888ab9064441dc7545abaeb6479c4bf123a4b91011cebd74490619e72193f\": rpc error: code = NotFound desc = could not find container \"918888ab9064441dc7545abaeb6479c4bf123a4b91011cebd74490619e72193f\": container with ID starting with 918888ab9064441dc7545abaeb6479c4bf123a4b91011cebd74490619e72193f not found: ID does not exist"
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.112307 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"]
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.112969 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts" podUID="f4631f59-6443-41b9-9578-6115002922bf" containerName="controller-manager" containerID="cri-o://f0ef53ad47ec0a1ef4ce93c4b007da46316db7d1ecbf13a7b5a9c5834d508e6a" gracePeriod=30
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.209374 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"]
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.209651 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k" podUID="97a23007-dfab-4213-be44-cbc0ebd13e3a" containerName="route-controller-manager" containerID="cri-o://574af29a78f7cd953270bcb0fc25fd58616e69b388c5691846be6de83d00725c" gracePeriod=30
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.258818 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jkf2s"
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.312380 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t9m2\" (UniqueName: \"kubernetes.io/projected/7bada626-6ad8-4fad-8649-0b9f3497e68e-kube-api-access-4t9m2\") pod \"7bada626-6ad8-4fad-8649-0b9f3497e68e\" (UID: \"7bada626-6ad8-4fad-8649-0b9f3497e68e\") "
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.312451 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bada626-6ad8-4fad-8649-0b9f3497e68e-catalog-content\") pod \"7bada626-6ad8-4fad-8649-0b9f3497e68e\" (UID: \"7bada626-6ad8-4fad-8649-0b9f3497e68e\") "
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.312510 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bada626-6ad8-4fad-8649-0b9f3497e68e-utilities\") pod \"7bada626-6ad8-4fad-8649-0b9f3497e68e\" (UID: \"7bada626-6ad8-4fad-8649-0b9f3497e68e\") "
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.313785 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bada626-6ad8-4fad-8649-0b9f3497e68e-utilities" (OuterVolumeSpecName: "utilities") pod "7bada626-6ad8-4fad-8649-0b9f3497e68e" (UID: "7bada626-6ad8-4fad-8649-0b9f3497e68e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.318894 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bada626-6ad8-4fad-8649-0b9f3497e68e-kube-api-access-4t9m2" (OuterVolumeSpecName: "kube-api-access-4t9m2") pod "7bada626-6ad8-4fad-8649-0b9f3497e68e" (UID: "7bada626-6ad8-4fad-8649-0b9f3497e68e"). InnerVolumeSpecName "kube-api-access-4t9m2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.377311 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bada626-6ad8-4fad-8649-0b9f3497e68e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bada626-6ad8-4fad-8649-0b9f3497e68e" (UID: "7bada626-6ad8-4fad-8649-0b9f3497e68e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.414516 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t9m2\" (UniqueName: \"kubernetes.io/projected/7bada626-6ad8-4fad-8649-0b9f3497e68e-kube-api-access-4t9m2\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.414549 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bada626-6ad8-4fad-8649-0b9f3497e68e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.414559 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bada626-6ad8-4fad-8649-0b9f3497e68e-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.562139 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.616910 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-config\") pod \"f4631f59-6443-41b9-9578-6115002922bf\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") "
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.616992 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-proxy-ca-bundles\") pod \"f4631f59-6443-41b9-9578-6115002922bf\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") "
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.617032 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5rwk\" (UniqueName: \"kubernetes.io/projected/f4631f59-6443-41b9-9578-6115002922bf-kube-api-access-g5rwk\") pod \"f4631f59-6443-41b9-9578-6115002922bf\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") "
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.617159 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-client-ca\") pod \"f4631f59-6443-41b9-9578-6115002922bf\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") "
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.617191 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4631f59-6443-41b9-9578-6115002922bf-serving-cert\") pod \"f4631f59-6443-41b9-9578-6115002922bf\" (UID: \"f4631f59-6443-41b9-9578-6115002922bf\") "
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.618090 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f4631f59-6443-41b9-9578-6115002922bf" (UID: "f4631f59-6443-41b9-9578-6115002922bf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.618193 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-config" (OuterVolumeSpecName: "config") pod "f4631f59-6443-41b9-9578-6115002922bf" (UID: "f4631f59-6443-41b9-9578-6115002922bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.618237 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "f4631f59-6443-41b9-9578-6115002922bf" (UID: "f4631f59-6443-41b9-9578-6115002922bf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.620394 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4631f59-6443-41b9-9578-6115002922bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f4631f59-6443-41b9-9578-6115002922bf" (UID: "f4631f59-6443-41b9-9578-6115002922bf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.620604 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4631f59-6443-41b9-9578-6115002922bf-kube-api-access-g5rwk" (OuterVolumeSpecName: "kube-api-access-g5rwk") pod "f4631f59-6443-41b9-9578-6115002922bf" (UID: "f4631f59-6443-41b9-9578-6115002922bf"). InnerVolumeSpecName "kube-api-access-g5rwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.646235 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.652986 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d9fed4-edc5-4f5e-9962-91a6382fb569" path="/var/lib/kubelet/pods/c0d9fed4-edc5-4f5e-9962-91a6382fb569/volumes"
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.718874 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97a23007-dfab-4213-be44-cbc0ebd13e3a-serving-cert\") pod \"97a23007-dfab-4213-be44-cbc0ebd13e3a\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") "
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.718925 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97a23007-dfab-4213-be44-cbc0ebd13e3a-client-ca\") pod \"97a23007-dfab-4213-be44-cbc0ebd13e3a\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") "
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.718977 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a23007-dfab-4213-be44-cbc0ebd13e3a-config\") pod \"97a23007-dfab-4213-be44-cbc0ebd13e3a\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") "
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.718998 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wztd9\" (UniqueName: \"kubernetes.io/projected/97a23007-dfab-4213-be44-cbc0ebd13e3a-kube-api-access-wztd9\") pod \"97a23007-dfab-4213-be44-cbc0ebd13e3a\" (UID: \"97a23007-dfab-4213-be44-cbc0ebd13e3a\") "
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.719288 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-config\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.719306 5049 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.719318 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5rwk\" (UniqueName: \"kubernetes.io/projected/f4631f59-6443-41b9-9578-6115002922bf-kube-api-access-g5rwk\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.719327 5049 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4631f59-6443-41b9-9578-6115002922bf-client-ca\") on node \"crc\" DevicePath \"\""
Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.719335 5049 reconciler_common.go:293]
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4631f59-6443-41b9-9578-6115002922bf-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.720042 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a23007-dfab-4213-be44-cbc0ebd13e3a-config" (OuterVolumeSpecName: "config") pod "97a23007-dfab-4213-be44-cbc0ebd13e3a" (UID: "97a23007-dfab-4213-be44-cbc0ebd13e3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.720087 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a23007-dfab-4213-be44-cbc0ebd13e3a-client-ca" (OuterVolumeSpecName: "client-ca") pod "97a23007-dfab-4213-be44-cbc0ebd13e3a" (UID: "97a23007-dfab-4213-be44-cbc0ebd13e3a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.721752 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a23007-dfab-4213-be44-cbc0ebd13e3a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97a23007-dfab-4213-be44-cbc0ebd13e3a" (UID: "97a23007-dfab-4213-be44-cbc0ebd13e3a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.722214 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a23007-dfab-4213-be44-cbc0ebd13e3a-kube-api-access-wztd9" (OuterVolumeSpecName: "kube-api-access-wztd9") pod "97a23007-dfab-4213-be44-cbc0ebd13e3a" (UID: "97a23007-dfab-4213-be44-cbc0ebd13e3a"). InnerVolumeSpecName "kube-api-access-wztd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.738522 5049 generic.go:334] "Generic (PLEG): container finished" podID="7bada626-6ad8-4fad-8649-0b9f3497e68e" containerID="d918e3a91942f37f25d7cf5e1abb24150d41ab4450040aeeb743376ad2671d46" exitCode=0 Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.738589 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkf2s" event={"ID":"7bada626-6ad8-4fad-8649-0b9f3497e68e","Type":"ContainerDied","Data":"d918e3a91942f37f25d7cf5e1abb24150d41ab4450040aeeb743376ad2671d46"} Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.738653 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkf2s" event={"ID":"7bada626-6ad8-4fad-8649-0b9f3497e68e","Type":"ContainerDied","Data":"3f1f482e415240fffa9bcd3d9eb5dd1675103b0d59a6619e3b275a944fbdb9cb"} Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.738600 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jkf2s" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.738710 5049 scope.go:117] "RemoveContainer" containerID="d918e3a91942f37f25d7cf5e1abb24150d41ab4450040aeeb743376ad2671d46" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.740544 5049 generic.go:334] "Generic (PLEG): container finished" podID="97a23007-dfab-4213-be44-cbc0ebd13e3a" containerID="574af29a78f7cd953270bcb0fc25fd58616e69b388c5691846be6de83d00725c" exitCode=0 Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.740616 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k" event={"ID":"97a23007-dfab-4213-be44-cbc0ebd13e3a","Type":"ContainerDied","Data":"574af29a78f7cd953270bcb0fc25fd58616e69b388c5691846be6de83d00725c"} Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.740646 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k" event={"ID":"97a23007-dfab-4213-be44-cbc0ebd13e3a","Type":"ContainerDied","Data":"dc8e2b99b0649c7b2464fa293ffb465e29f3e7d31f7bf67acf73f39f9ea1a57d"} Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.740704 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.744125 5049 generic.go:334] "Generic (PLEG): container finished" podID="f4631f59-6443-41b9-9578-6115002922bf" containerID="f0ef53ad47ec0a1ef4ce93c4b007da46316db7d1ecbf13a7b5a9c5834d508e6a" exitCode=0 Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.744176 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.744176 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts" event={"ID":"f4631f59-6443-41b9-9578-6115002922bf","Type":"ContainerDied","Data":"f0ef53ad47ec0a1ef4ce93c4b007da46316db7d1ecbf13a7b5a9c5834d508e6a"} Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.744332 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts" event={"ID":"f4631f59-6443-41b9-9578-6115002922bf","Type":"ContainerDied","Data":"a3bfc45082c6e4c5428be28ff3911cb8d3430c8972c163d94b2338126fe179f2"} Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.757705 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jkf2s"] Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.759784 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jkf2s"] Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.771104 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"] Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.775030 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bd9fc98f5-4wf8k"] Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.778425 5049 scope.go:117] "RemoveContainer" containerID="1b9f32da270aa325b4217f2bbc20d6e5c755d8f5f99988225a5dc48d803c9712" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.779169 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"] Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.782830 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5fd94cbdfb-mj7ts"] Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.794373 5049 scope.go:117] "RemoveContainer" containerID="e0c819dc354ceeade8b1b86eb5c8b22fe9751edd1b79f40ace67ba586be640d3" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.808517 5049 scope.go:117] "RemoveContainer" containerID="d918e3a91942f37f25d7cf5e1abb24150d41ab4450040aeeb743376ad2671d46" Jan 27 17:00:59 crc kubenswrapper[5049]: E0127 17:00:59.808916 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d918e3a91942f37f25d7cf5e1abb24150d41ab4450040aeeb743376ad2671d46\": container with ID starting with d918e3a91942f37f25d7cf5e1abb24150d41ab4450040aeeb743376ad2671d46 not found: ID does not exist" containerID="d918e3a91942f37f25d7cf5e1abb24150d41ab4450040aeeb743376ad2671d46" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.808954 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d918e3a91942f37f25d7cf5e1abb24150d41ab4450040aeeb743376ad2671d46"} err="failed to get container status \"d918e3a91942f37f25d7cf5e1abb24150d41ab4450040aeeb743376ad2671d46\": rpc error: code = NotFound desc = could not find container \"d918e3a91942f37f25d7cf5e1abb24150d41ab4450040aeeb743376ad2671d46\": container with ID starting with d918e3a91942f37f25d7cf5e1abb24150d41ab4450040aeeb743376ad2671d46 not found: ID does not exist" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 
17:00:59.808987 5049 scope.go:117] "RemoveContainer" containerID="1b9f32da270aa325b4217f2bbc20d6e5c755d8f5f99988225a5dc48d803c9712" Jan 27 17:00:59 crc kubenswrapper[5049]: E0127 17:00:59.809178 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b9f32da270aa325b4217f2bbc20d6e5c755d8f5f99988225a5dc48d803c9712\": container with ID starting with 1b9f32da270aa325b4217f2bbc20d6e5c755d8f5f99988225a5dc48d803c9712 not found: ID does not exist" containerID="1b9f32da270aa325b4217f2bbc20d6e5c755d8f5f99988225a5dc48d803c9712" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.809203 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9f32da270aa325b4217f2bbc20d6e5c755d8f5f99988225a5dc48d803c9712"} err="failed to get container status \"1b9f32da270aa325b4217f2bbc20d6e5c755d8f5f99988225a5dc48d803c9712\": rpc error: code = NotFound desc = could not find container \"1b9f32da270aa325b4217f2bbc20d6e5c755d8f5f99988225a5dc48d803c9712\": container with ID starting with 1b9f32da270aa325b4217f2bbc20d6e5c755d8f5f99988225a5dc48d803c9712 not found: ID does not exist" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.809220 5049 scope.go:117] "RemoveContainer" containerID="e0c819dc354ceeade8b1b86eb5c8b22fe9751edd1b79f40ace67ba586be640d3" Jan 27 17:00:59 crc kubenswrapper[5049]: E0127 17:00:59.809403 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0c819dc354ceeade8b1b86eb5c8b22fe9751edd1b79f40ace67ba586be640d3\": container with ID starting with e0c819dc354ceeade8b1b86eb5c8b22fe9751edd1b79f40ace67ba586be640d3 not found: ID does not exist" containerID="e0c819dc354ceeade8b1b86eb5c8b22fe9751edd1b79f40ace67ba586be640d3" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.809423 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c819dc354ceeade8b1b86eb5c8b22fe9751edd1b79f40ace67ba586be640d3"} err="failed to get container status \"e0c819dc354ceeade8b1b86eb5c8b22fe9751edd1b79f40ace67ba586be640d3\": rpc error: code = NotFound desc = could not find container \"e0c819dc354ceeade8b1b86eb5c8b22fe9751edd1b79f40ace67ba586be640d3\": container with ID starting with e0c819dc354ceeade8b1b86eb5c8b22fe9751edd1b79f40ace67ba586be640d3 not found: ID does not exist" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.809435 5049 scope.go:117] "RemoveContainer" containerID="574af29a78f7cd953270bcb0fc25fd58616e69b388c5691846be6de83d00725c" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.820971 5049 scope.go:117] "RemoveContainer" containerID="574af29a78f7cd953270bcb0fc25fd58616e69b388c5691846be6de83d00725c" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.821248 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97a23007-dfab-4213-be44-cbc0ebd13e3a-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.821266 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wztd9\" (UniqueName: \"kubernetes.io/projected/97a23007-dfab-4213-be44-cbc0ebd13e3a-kube-api-access-wztd9\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.821281 5049 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97a23007-dfab-4213-be44-cbc0ebd13e3a-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.821294 5049 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97a23007-dfab-4213-be44-cbc0ebd13e3a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 17:00:59 crc kubenswrapper[5049]: E0127 17:00:59.821362 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"574af29a78f7cd953270bcb0fc25fd58616e69b388c5691846be6de83d00725c\": container with ID starting with 574af29a78f7cd953270bcb0fc25fd58616e69b388c5691846be6de83d00725c not found: ID does not exist" containerID="574af29a78f7cd953270bcb0fc25fd58616e69b388c5691846be6de83d00725c" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.821380 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"574af29a78f7cd953270bcb0fc25fd58616e69b388c5691846be6de83d00725c"} err="failed to get container status \"574af29a78f7cd953270bcb0fc25fd58616e69b388c5691846be6de83d00725c\": rpc error: code = NotFound desc = could not find container \"574af29a78f7cd953270bcb0fc25fd58616e69b388c5691846be6de83d00725c\": container with ID starting with 574af29a78f7cd953270bcb0fc25fd58616e69b388c5691846be6de83d00725c not found: ID does not exist" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.821396 5049 scope.go:117] "RemoveContainer" containerID="f0ef53ad47ec0a1ef4ce93c4b007da46316db7d1ecbf13a7b5a9c5834d508e6a" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.836845 5049 scope.go:117] "RemoveContainer" containerID="f0ef53ad47ec0a1ef4ce93c4b007da46316db7d1ecbf13a7b5a9c5834d508e6a" Jan 27 17:00:59 crc kubenswrapper[5049]: E0127 17:00:59.837463 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0ef53ad47ec0a1ef4ce93c4b007da46316db7d1ecbf13a7b5a9c5834d508e6a\": container with ID starting with f0ef53ad47ec0a1ef4ce93c4b007da46316db7d1ecbf13a7b5a9c5834d508e6a not found: ID does not exist" containerID="f0ef53ad47ec0a1ef4ce93c4b007da46316db7d1ecbf13a7b5a9c5834d508e6a" Jan 27 17:00:59 crc kubenswrapper[5049]: I0127 17:00:59.837493 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ef53ad47ec0a1ef4ce93c4b007da46316db7d1ecbf13a7b5a9c5834d508e6a"} err="failed to get container status \"f0ef53ad47ec0a1ef4ce93c4b007da46316db7d1ecbf13a7b5a9c5834d508e6a\": rpc error: code = NotFound desc = could not find container \"f0ef53ad47ec0a1ef4ce93c4b007da46316db7d1ecbf13a7b5a9c5834d508e6a\": container with ID starting with f0ef53ad47ec0a1ef4ce93c4b007da46316db7d1ecbf13a7b5a9c5834d508e6a not found: ID does not exist" Jan 27 17:01:00 crc kubenswrapper[5049]: I0127 17:01:00.638459 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5w8k7"] Jan 27 17:01:00 crc kubenswrapper[5049]: I0127 17:01:00.638935 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5w8k7" podUID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5" containerName="registry-server" containerID="cri-o://ccdbeac88d9a2ab6256932fc862daae5201373f071788dabf43423e13f0b6e5d" gracePeriod=2 Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.040954 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86957cd6cd-m2zg5"] Jan 27 17:01:01 crc kubenswrapper[5049]: E0127 17:01:01.041936 5049 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d9fed4-edc5-4f5e-9962-91a6382fb569" containerName="extract-utilities" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.041976 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d9fed4-edc5-4f5e-9962-91a6382fb569" containerName="extract-utilities" Jan 27 17:01:01 crc kubenswrapper[5049]: E0127 17:01:01.042026 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a23007-dfab-4213-be44-cbc0ebd13e3a" containerName="route-controller-manager" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.042045 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a23007-dfab-4213-be44-cbc0ebd13e3a" containerName="route-controller-manager" Jan 27 17:01:01 crc kubenswrapper[5049]: E0127 17:01:01.042085 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bada626-6ad8-4fad-8649-0b9f3497e68e" containerName="extract-utilities" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.042106 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bada626-6ad8-4fad-8649-0b9f3497e68e" containerName="extract-utilities" Jan 27 17:01:01 crc kubenswrapper[5049]: E0127 17:01:01.042152 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d9fed4-edc5-4f5e-9962-91a6382fb569" containerName="extract-content" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.042172 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d9fed4-edc5-4f5e-9962-91a6382fb569" containerName="extract-content" Jan 27 17:01:01 crc kubenswrapper[5049]: E0127 17:01:01.042191 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4631f59-6443-41b9-9578-6115002922bf" containerName="controller-manager" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.042209 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4631f59-6443-41b9-9578-6115002922bf" containerName="controller-manager" Jan 27 17:01:01 crc kubenswrapper[5049]: E0127 17:01:01.042251 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d9fed4-edc5-4f5e-9962-91a6382fb569" containerName="registry-server" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.042271 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d9fed4-edc5-4f5e-9962-91a6382fb569" containerName="registry-server" Jan 27 17:01:01 crc kubenswrapper[5049]: E0127 17:01:01.042313 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bada626-6ad8-4fad-8649-0b9f3497e68e" containerName="extract-content" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.042341 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bada626-6ad8-4fad-8649-0b9f3497e68e" containerName="extract-content" Jan 27 17:01:01 crc kubenswrapper[5049]: E0127 17:01:01.042393 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bada626-6ad8-4fad-8649-0b9f3497e68e" containerName="registry-server" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.042412 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bada626-6ad8-4fad-8649-0b9f3497e68e" containerName="registry-server" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.042937 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d9fed4-edc5-4f5e-9962-91a6382fb569" containerName="registry-server" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.042988 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a23007-dfab-4213-be44-cbc0ebd13e3a" containerName="route-controller-manager" 
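
The cpu_manager.go:410 / memory_manager.go:354 burst above is the kubelet clearing stale resource-manager checkpoints: as the replacement controller-manager pod is admitted (SyncLoop ADD at 17:01:01), CPU and memory assignments still recorded for containers of since-deleted pods (the old controller-manager, route-controller-manager, and the marketplace catalog pods) are dropped from the managers' state. A minimal sketch of that sweep, assuming a plain in-memory map instead of the kubelet's checkpointed state store (all types and names below are hypothetical, for illustration only):

    package main

    import "fmt"

    // key mirrors the (podUID, containerName) pairs in the RemoveStaleState lines.
    type key struct {
        podUID        string
        containerName string
    }

    // removeStaleState drops assignments whose pod is no longer in the active
    // set, which is what the paired cpu_manager/state_mem entries record.
    func removeStaleState(assignments map[key]string, activePods map[string]bool) {
        for k := range assignments {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                    k.podUID, k.containerName)
                delete(assignments, k) // deleting the current key during range is safe in Go
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {podUID: "f4631f59-6443-41b9-9578-6115002922bf", containerName: "controller-manager"}:       "cpus=0-1",
            {podUID: "97a23007-dfab-4213-be44-cbc0ebd13e3a", containerName: "route-controller-manager"}: "cpus=2",
        }
        active := map[string]bool{} // both pods were deleted above, so every entry is stale
        removeStaleState(assignments, active)
    }

Note that the sweep logs at error level (the E0127 cpu_manager.go:410 lines) while the matching state_mem.go:107 "Deleted CPUSet assignment" lines are info level, which suggests these E-level entries are routine cleanup noise rather than failures.
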
Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.043013 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bada626-6ad8-4fad-8649-0b9f3497e68e" containerName="registry-server" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.043037 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4631f59-6443-41b9-9578-6115002922bf" containerName="controller-manager" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.044793 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.051966 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.052125 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.052340 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.052581 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.052749 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.053313 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.062021 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.062424 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd"] Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.063578 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.069578 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.069927 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.070134 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.070472 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.070664 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.070903 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.084659 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86957cd6cd-m2zg5"] Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.089848 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd"] Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.142828 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b542a9bd-d4e4-466f-9c44-154cddf848b6-config\") pod \"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.142881 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f62m\" (UniqueName: \"kubernetes.io/projected/b542a9bd-d4e4-466f-9c44-154cddf848b6-kube-api-access-7f62m\") pod \"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.143085 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5w8k7" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.143125 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9863fc2f-2307-434d-b806-d4afc97866a5-client-ca\") pod \"route-controller-manager-855646bdb5-57psd\" (UID: \"9863fc2f-2307-434d-b806-d4afc97866a5\") " pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.143247 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9863fc2f-2307-434d-b806-d4afc97866a5-config\") pod \"route-controller-manager-855646bdb5-57psd\" (UID: \"9863fc2f-2307-434d-b806-d4afc97866a5\") " pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.143538 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9863fc2f-2307-434d-b806-d4afc97866a5-serving-cert\") pod \"route-controller-manager-855646bdb5-57psd\" (UID: \"9863fc2f-2307-434d-b806-d4afc97866a5\") " pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.143566 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b2p8\" (UniqueName: \"kubernetes.io/projected/9863fc2f-2307-434d-b806-d4afc97866a5-kube-api-access-9b2p8\") pod \"route-controller-manager-855646bdb5-57psd\" (UID: \"9863fc2f-2307-434d-b806-d4afc97866a5\") " pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.143627 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b542a9bd-d4e4-466f-9c44-154cddf848b6-client-ca\") pod \"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.143649 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b542a9bd-d4e4-466f-9c44-154cddf848b6-serving-cert\") pod \"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.143667 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b542a9bd-d4e4-466f-9c44-154cddf848b6-proxy-ca-bundles\") pod \"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.245412 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-utilities\") pod \"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\" (UID: \"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\") " Jan 
27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.245936 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km6xd\" (UniqueName: \"kubernetes.io/projected/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-kube-api-access-km6xd\") pod \"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\" (UID: \"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\") " Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.246239 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-catalog-content\") pod \"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\" (UID: \"11537c3a-0298-48bc-a5f8-b79fe47c9cd5\") " Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.246657 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9863fc2f-2307-434d-b806-d4afc97866a5-config\") pod \"route-controller-manager-855646bdb5-57psd\" (UID: \"9863fc2f-2307-434d-b806-d4afc97866a5\") " pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.246982 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9863fc2f-2307-434d-b806-d4afc97866a5-serving-cert\") pod \"route-controller-manager-855646bdb5-57psd\" (UID: \"9863fc2f-2307-434d-b806-d4afc97866a5\") " pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.247209 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b2p8\" (UniqueName: \"kubernetes.io/projected/9863fc2f-2307-434d-b806-d4afc97866a5-kube-api-access-9b2p8\") pod \"route-controller-manager-855646bdb5-57psd\" (UID: \"9863fc2f-2307-434d-b806-d4afc97866a5\") " pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.247444 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b542a9bd-d4e4-466f-9c44-154cddf848b6-client-ca\") pod \"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.247622 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b542a9bd-d4e4-466f-9c44-154cddf848b6-serving-cert\") pod \"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.247848 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b542a9bd-d4e4-466f-9c44-154cddf848b6-proxy-ca-bundles\") pod \"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.248134 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b542a9bd-d4e4-466f-9c44-154cddf848b6-config\") pod 
\"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.248342 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f62m\" (UniqueName: \"kubernetes.io/projected/b542a9bd-d4e4-466f-9c44-154cddf848b6-kube-api-access-7f62m\") pod \"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.248611 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9863fc2f-2307-434d-b806-d4afc97866a5-client-ca\") pod \"route-controller-manager-855646bdb5-57psd\" (UID: \"9863fc2f-2307-434d-b806-d4afc97866a5\") " pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.248404 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b542a9bd-d4e4-466f-9c44-154cddf848b6-client-ca\") pod \"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.249200 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9863fc2f-2307-434d-b806-d4afc97866a5-config\") pod \"route-controller-manager-855646bdb5-57psd\" (UID: \"9863fc2f-2307-434d-b806-d4afc97866a5\") " pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.249264 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-utilities" (OuterVolumeSpecName: "utilities") pod "11537c3a-0298-48bc-a5f8-b79fe47c9cd5" (UID: "11537c3a-0298-48bc-a5f8-b79fe47c9cd5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.250427 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b542a9bd-d4e4-466f-9c44-154cddf848b6-config\") pod \"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.250974 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-kube-api-access-km6xd" (OuterVolumeSpecName: "kube-api-access-km6xd") pod "11537c3a-0298-48bc-a5f8-b79fe47c9cd5" (UID: "11537c3a-0298-48bc-a5f8-b79fe47c9cd5"). InnerVolumeSpecName "kube-api-access-km6xd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.251244 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9863fc2f-2307-434d-b806-d4afc97866a5-client-ca\") pod \"route-controller-manager-855646bdb5-57psd\" (UID: \"9863fc2f-2307-434d-b806-d4afc97866a5\") " pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.252467 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b542a9bd-d4e4-466f-9c44-154cddf848b6-serving-cert\") pod \"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.259384 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b542a9bd-d4e4-466f-9c44-154cddf848b6-proxy-ca-bundles\") pod \"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.261065 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9863fc2f-2307-434d-b806-d4afc97866a5-serving-cert\") pod \"route-controller-manager-855646bdb5-57psd\" (UID: \"9863fc2f-2307-434d-b806-d4afc97866a5\") " pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.263176 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b2p8\" (UniqueName: \"kubernetes.io/projected/9863fc2f-2307-434d-b806-d4afc97866a5-kube-api-access-9b2p8\") pod \"route-controller-manager-855646bdb5-57psd\" (UID: \"9863fc2f-2307-434d-b806-d4afc97866a5\") " pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.263180 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f62m\" (UniqueName: \"kubernetes.io/projected/b542a9bd-d4e4-466f-9c44-154cddf848b6-kube-api-access-7f62m\") pod \"controller-manager-86957cd6cd-m2zg5\" (UID: \"b542a9bd-d4e4-466f-9c44-154cddf848b6\") " pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.270774 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11537c3a-0298-48bc-a5f8-b79fe47c9cd5" (UID: "11537c3a-0298-48bc-a5f8-b79fe47c9cd5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.350708 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km6xd\" (UniqueName: \"kubernetes.io/projected/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-kube-api-access-km6xd\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.350770 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.350792 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11537c3a-0298-48bc-a5f8-b79fe47c9cd5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.368296 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.385564 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.616237 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86957cd6cd-m2zg5"] Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.657287 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bada626-6ad8-4fad-8649-0b9f3497e68e" path="/var/lib/kubelet/pods/7bada626-6ad8-4fad-8649-0b9f3497e68e/volumes" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.658339 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a23007-dfab-4213-be44-cbc0ebd13e3a" path="/var/lib/kubelet/pods/97a23007-dfab-4213-be44-cbc0ebd13e3a/volumes" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.658874 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4631f59-6443-41b9-9578-6115002922bf" path="/var/lib/kubelet/pods/f4631f59-6443-41b9-9578-6115002922bf/volumes" Jan 27 17:01:01 crc kubenswrapper[5049]: W0127 17:01:01.708833 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9863fc2f_2307_434d_b806_d4afc97866a5.slice/crio-0ab0ceeabfa27bc2a2ecb95fe7334726423ba104a4068f71831879cbb928cf7c WatchSource:0}: Error finding container 0ab0ceeabfa27bc2a2ecb95fe7334726423ba104a4068f71831879cbb928cf7c: Status 404 returned error can't find the container with id 0ab0ceeabfa27bc2a2ecb95fe7334726423ba104a4068f71831879cbb928cf7c Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.710029 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd"] Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.769153 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" event={"ID":"b542a9bd-d4e4-466f-9c44-154cddf848b6","Type":"ContainerStarted","Data":"55250f773f3fcde6ede815fc14805573abdf5410f66e10eb4eb7fdee9e8be97f"} Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.770529 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" 
event={"ID":"9863fc2f-2307-434d-b806-d4afc97866a5","Type":"ContainerStarted","Data":"0ab0ceeabfa27bc2a2ecb95fe7334726423ba104a4068f71831879cbb928cf7c"} Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.772893 5049 generic.go:334] "Generic (PLEG): container finished" podID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5" containerID="ccdbeac88d9a2ab6256932fc862daae5201373f071788dabf43423e13f0b6e5d" exitCode=0 Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.772921 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5w8k7" event={"ID":"11537c3a-0298-48bc-a5f8-b79fe47c9cd5","Type":"ContainerDied","Data":"ccdbeac88d9a2ab6256932fc862daae5201373f071788dabf43423e13f0b6e5d"} Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.772944 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5w8k7" event={"ID":"11537c3a-0298-48bc-a5f8-b79fe47c9cd5","Type":"ContainerDied","Data":"9096b80c1c29528319b9d83bc9cc4900beeee9c51804ad63f65d5792342b981e"} Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.772967 5049 scope.go:117] "RemoveContainer" containerID="ccdbeac88d9a2ab6256932fc862daae5201373f071788dabf43423e13f0b6e5d" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.773110 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5w8k7" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.796614 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5w8k7"] Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.800396 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5w8k7"] Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.800530 5049 scope.go:117] "RemoveContainer" containerID="99a1d99af8b8bfdd96eae97f67ec4ac3e3e865b4a69ad6c0a08931c7fee629f0" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.827497 5049 scope.go:117] "RemoveContainer" containerID="d332496ac769e3ae99e8207a143b600ce3960bb7d1c27c1b0e4ff1a3ef2cab7c" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.843227 5049 scope.go:117] "RemoveContainer" containerID="ccdbeac88d9a2ab6256932fc862daae5201373f071788dabf43423e13f0b6e5d" Jan 27 17:01:01 crc kubenswrapper[5049]: E0127 17:01:01.843722 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccdbeac88d9a2ab6256932fc862daae5201373f071788dabf43423e13f0b6e5d\": container with ID starting with ccdbeac88d9a2ab6256932fc862daae5201373f071788dabf43423e13f0b6e5d not found: ID does not exist" containerID="ccdbeac88d9a2ab6256932fc862daae5201373f071788dabf43423e13f0b6e5d" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.843838 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccdbeac88d9a2ab6256932fc862daae5201373f071788dabf43423e13f0b6e5d"} err="failed to get container status \"ccdbeac88d9a2ab6256932fc862daae5201373f071788dabf43423e13f0b6e5d\": rpc error: code = NotFound desc = could not find container \"ccdbeac88d9a2ab6256932fc862daae5201373f071788dabf43423e13f0b6e5d\": container with ID starting with ccdbeac88d9a2ab6256932fc862daae5201373f071788dabf43423e13f0b6e5d not found: ID does not exist" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.843894 5049 scope.go:117] "RemoveContainer" containerID="99a1d99af8b8bfdd96eae97f67ec4ac3e3e865b4a69ad6c0a08931c7fee629f0" Jan 27 
17:01:01 crc kubenswrapper[5049]: E0127 17:01:01.844306 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99a1d99af8b8bfdd96eae97f67ec4ac3e3e865b4a69ad6c0a08931c7fee629f0\": container with ID starting with 99a1d99af8b8bfdd96eae97f67ec4ac3e3e865b4a69ad6c0a08931c7fee629f0 not found: ID does not exist" containerID="99a1d99af8b8bfdd96eae97f67ec4ac3e3e865b4a69ad6c0a08931c7fee629f0" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.844349 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99a1d99af8b8bfdd96eae97f67ec4ac3e3e865b4a69ad6c0a08931c7fee629f0"} err="failed to get container status \"99a1d99af8b8bfdd96eae97f67ec4ac3e3e865b4a69ad6c0a08931c7fee629f0\": rpc error: code = NotFound desc = could not find container \"99a1d99af8b8bfdd96eae97f67ec4ac3e3e865b4a69ad6c0a08931c7fee629f0\": container with ID starting with 99a1d99af8b8bfdd96eae97f67ec4ac3e3e865b4a69ad6c0a08931c7fee629f0 not found: ID does not exist" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.844378 5049 scope.go:117] "RemoveContainer" containerID="d332496ac769e3ae99e8207a143b600ce3960bb7d1c27c1b0e4ff1a3ef2cab7c" Jan 27 17:01:01 crc kubenswrapper[5049]: E0127 17:01:01.844728 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d332496ac769e3ae99e8207a143b600ce3960bb7d1c27c1b0e4ff1a3ef2cab7c\": container with ID starting with d332496ac769e3ae99e8207a143b600ce3960bb7d1c27c1b0e4ff1a3ef2cab7c not found: ID does not exist" containerID="d332496ac769e3ae99e8207a143b600ce3960bb7d1c27c1b0e4ff1a3ef2cab7c" Jan 27 17:01:01 crc kubenswrapper[5049]: I0127 17:01:01.844787 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d332496ac769e3ae99e8207a143b600ce3960bb7d1c27c1b0e4ff1a3ef2cab7c"} err="failed to get container status \"d332496ac769e3ae99e8207a143b600ce3960bb7d1c27c1b0e4ff1a3ef2cab7c\": rpc error: code = NotFound desc = could not find container \"d332496ac769e3ae99e8207a143b600ce3960bb7d1c27c1b0e4ff1a3ef2cab7c\": container with ID starting with d332496ac769e3ae99e8207a143b600ce3960bb7d1c27c1b0e4ff1a3ef2cab7c not found: ID does not exist" Jan 27 17:01:02 crc kubenswrapper[5049]: I0127 17:01:02.779203 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" event={"ID":"9863fc2f-2307-434d-b806-d4afc97866a5","Type":"ContainerStarted","Data":"fb287710643f4c9b5489f0822ce56b741e13f0a81d863d1f0893abff76be74a5"} Jan 27 17:01:02 crc kubenswrapper[5049]: I0127 17:01:02.779637 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:02 crc kubenswrapper[5049]: I0127 17:01:02.783442 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" event={"ID":"b542a9bd-d4e4-466f-9c44-154cddf848b6","Type":"ContainerStarted","Data":"f42adf2d6cab436e0fc7d091ced76f5769d1dbec059804ab46d9d78281ac39fb"} Jan 27 17:01:02 crc kubenswrapper[5049]: I0127 17:01:02.783699 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:02 crc kubenswrapper[5049]: I0127 17:01:02.792008 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" Jan 27 17:01:02 crc kubenswrapper[5049]: I0127 17:01:02.803003 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" podStartSLOduration=3.802965725 podStartE2EDuration="3.802965725s" podCreationTimestamp="2026-01-27 17:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:01:02.799617797 +0000 UTC m=+237.898591346" watchObservedRunningTime="2026-01-27 17:01:02.802965725 +0000 UTC m=+237.901939274" Jan 27 17:01:02 crc kubenswrapper[5049]: I0127 17:01:02.818854 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86957cd6cd-m2zg5" podStartSLOduration=3.818834199 podStartE2EDuration="3.818834199s" podCreationTimestamp="2026-01-27 17:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:01:02.816177271 +0000 UTC m=+237.915150820" watchObservedRunningTime="2026-01-27 17:01:02.818834199 +0000 UTC m=+237.917807748" Jan 27 17:01:03 crc kubenswrapper[5049]: I0127 17:01:03.027476 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-855646bdb5-57psd" Jan 27 17:01:03 crc kubenswrapper[5049]: I0127 17:01:03.656907 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5" path="/var/lib/kubelet/pods/11537c3a-0298-48bc-a5f8-b79fe47c9cd5/volumes" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.825588 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" containerName="oauth-openshift" containerID="cri-o://30b2b66094fd34b7b088861d6774ca06f9f4836b136330e6ff6c3da503a0b28c" gracePeriod=15 Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.886455 5049 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 17:01:11 crc kubenswrapper[5049]: E0127 17:01:11.886977 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5" containerName="extract-content" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.887016 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5" containerName="extract-content" Jan 27 17:01:11 crc kubenswrapper[5049]: E0127 17:01:11.887039 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5" containerName="registry-server" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.887088 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5" containerName="registry-server" Jan 27 17:01:11 crc kubenswrapper[5049]: E0127 17:01:11.887107 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5" containerName="extract-utilities" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.887121 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5" containerName="extract-utilities" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 
Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.887300 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="11537c3a-0298-48bc-a5f8-b79fe47c9cd5" containerName="registry-server"
Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.887925 5049 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.888119 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.888519 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d" gracePeriod=15
Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.888521 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b" gracePeriod=15
Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.888823 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321" gracePeriod=15
Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.889095 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da" gracePeriod=15
Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.889273 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba" gracePeriod=15
Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.890283 5049 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 27 17:01:11 crc kubenswrapper[5049]: E0127 17:01:11.890770 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.890840 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 27 17:01:11 crc kubenswrapper[5049]: E0127 17:01:11.890919 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.890937 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 27 17:01:11 crc kubenswrapper[5049]: E0127 17:01:11.890957 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792"
containerName="kube-apiserver-cert-regeneration-controller" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.891014 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 17:01:11 crc kubenswrapper[5049]: E0127 17:01:11.891124 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.891146 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 17:01:11 crc kubenswrapper[5049]: E0127 17:01:11.891165 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.891228 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 17:01:11 crc kubenswrapper[5049]: E0127 17:01:11.891257 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.891273 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 17:01:11 crc kubenswrapper[5049]: E0127 17:01:11.891341 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.891358 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.891845 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.891931 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.891964 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.892041 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.892073 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.892148 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 17:01:11 crc kubenswrapper[5049]: E0127 17:01:11.892483 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.892508 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 27 17:01:11 crc kubenswrapper[5049]: I0127 17:01:11.892770 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.011542 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.011596 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.011685 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.011710 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.011776 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.011815 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.011839 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.011862 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc 
kubenswrapper[5049]: E0127 17:01:12.075661 5049 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.113650 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.113748 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.113826 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.113850 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.113836 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.113890 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.113946 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.113938 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.113970 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.114024 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.114059 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.114090 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.114123 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.114128 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.114151 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.114279 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.376445 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.379765 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.380765 5049 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.381464 5049 status_manager.go:851] "Failed to get status for pod" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tn44m\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:12 crc kubenswrapper[5049]: E0127 17:01:12.402399 5049 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.20:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ea522c7bdd80d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 17:01:12.401762317 +0000 UTC m=+247.500735876,LastTimestamp:2026-01-27 17:01:12.401762317 +0000 UTC m=+247.500735876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.417506 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-router-certs\") pod \"b7637684-717f-4bf3-bba2-cd3dec71715d\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.417558 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-serving-cert\") pod \"b7637684-717f-4bf3-bba2-cd3dec71715d\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.417623 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-login\") pod \"b7637684-717f-4bf3-bba2-cd3dec71715d\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.417664 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-provider-selection\") pod \"b7637684-717f-4bf3-bba2-cd3dec71715d\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.417717 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb84g\" (UniqueName: \"kubernetes.io/projected/b7637684-717f-4bf3-bba2-cd3dec71715d-kube-api-access-mb84g\") pod \"b7637684-717f-4bf3-bba2-cd3dec71715d\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.417749 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-session\") pod \"b7637684-717f-4bf3-bba2-cd3dec71715d\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.418446 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-cliconfig\") pod \"b7637684-717f-4bf3-bba2-cd3dec71715d\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.418504 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-trusted-ca-bundle\") pod \"b7637684-717f-4bf3-bba2-cd3dec71715d\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.418574 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-audit-policies\") pod \"b7637684-717f-4bf3-bba2-cd3dec71715d\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.418622 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-error\") pod \"b7637684-717f-4bf3-bba2-cd3dec71715d\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.418660 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-service-ca\") pod \"b7637684-717f-4bf3-bba2-cd3dec71715d\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.418729 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-idp-0-file-data\") pod \"b7637684-717f-4bf3-bba2-cd3dec71715d\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.419484 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod 
"b7637684-717f-4bf3-bba2-cd3dec71715d" (UID: "b7637684-717f-4bf3-bba2-cd3dec71715d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.419898 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b7637684-717f-4bf3-bba2-cd3dec71715d" (UID: "b7637684-717f-4bf3-bba2-cd3dec71715d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.420054 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b7637684-717f-4bf3-bba2-cd3dec71715d" (UID: "b7637684-717f-4bf3-bba2-cd3dec71715d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.420404 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7637684-717f-4bf3-bba2-cd3dec71715d-audit-dir\") pod \"b7637684-717f-4bf3-bba2-cd3dec71715d\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.420492 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-ocp-branding-template\") pod \"b7637684-717f-4bf3-bba2-cd3dec71715d\" (UID: \"b7637684-717f-4bf3-bba2-cd3dec71715d\") " Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.420524 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b7637684-717f-4bf3-bba2-cd3dec71715d" (UID: "b7637684-717f-4bf3-bba2-cd3dec71715d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.421096 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7637684-717f-4bf3-bba2-cd3dec71715d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b7637684-717f-4bf3-bba2-cd3dec71715d" (UID: "b7637684-717f-4bf3-bba2-cd3dec71715d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.422441 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7637684-717f-4bf3-bba2-cd3dec71715d-kube-api-access-mb84g" (OuterVolumeSpecName: "kube-api-access-mb84g") pod "b7637684-717f-4bf3-bba2-cd3dec71715d" (UID: "b7637684-717f-4bf3-bba2-cd3dec71715d"). InnerVolumeSpecName "kube-api-access-mb84g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.422570 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b7637684-717f-4bf3-bba2-cd3dec71715d" (UID: "b7637684-717f-4bf3-bba2-cd3dec71715d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.422664 5049 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.422728 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.422775 5049 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7637684-717f-4bf3-bba2-cd3dec71715d-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.422824 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.422845 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.423605 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b7637684-717f-4bf3-bba2-cd3dec71715d" (UID: "b7637684-717f-4bf3-bba2-cd3dec71715d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.423899 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b7637684-717f-4bf3-bba2-cd3dec71715d" (UID: "b7637684-717f-4bf3-bba2-cd3dec71715d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.424272 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b7637684-717f-4bf3-bba2-cd3dec71715d" (UID: "b7637684-717f-4bf3-bba2-cd3dec71715d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.424453 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b7637684-717f-4bf3-bba2-cd3dec71715d" (UID: "b7637684-717f-4bf3-bba2-cd3dec71715d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.424792 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b7637684-717f-4bf3-bba2-cd3dec71715d" (UID: "b7637684-717f-4bf3-bba2-cd3dec71715d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.426152 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b7637684-717f-4bf3-bba2-cd3dec71715d" (UID: "b7637684-717f-4bf3-bba2-cd3dec71715d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.426245 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b7637684-717f-4bf3-bba2-cd3dec71715d" (UID: "b7637684-717f-4bf3-bba2-cd3dec71715d"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.525174 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb84g\" (UniqueName: \"kubernetes.io/projected/b7637684-717f-4bf3-bba2-cd3dec71715d-kube-api-access-mb84g\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.525218 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.525235 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.525249 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.525262 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.525275 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.525289 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.525301 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.525312 5049 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7637684-717f-4bf3-bba2-cd3dec71715d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.848380 5049 generic.go:334] "Generic (PLEG): container finished" podID="971a61c1-8167-465d-8012-9b19ba71bdce" containerID="80a96c596543d23358dd9cc73227f37e48c462df0500c929a75abbcfc23425bd" exitCode=0 Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.848485 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"971a61c1-8167-465d-8012-9b19ba71bdce","Type":"ContainerDied","Data":"80a96c596543d23358dd9cc73227f37e48c462df0500c929a75abbcfc23425bd"} Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.849877 5049 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.850493 5049 status_manager.go:851] "Failed to get status for pod" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tn44m\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.850992 5049 status_manager.go:851] "Failed to get status for pod" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.852255 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"18d3fd0453cc298f1f6b654a4c9bf30d4d8f1e1700a7afdb6f219de701ed48ed"} Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.852315 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9d1de32fa7f5923a6378925dbd399656da27c4e3e2302c036de0dffbcb3437f2"} Jan 27 17:01:12 crc kubenswrapper[5049]: E0127 17:01:12.853084 5049 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.853393 5049 status_manager.go:851] "Failed to get status for pod" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.853934 5049 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.854407 5049 status_manager.go:851] "Failed to get status for pod" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tn44m\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.856782 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.859914 5049 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.861110 5049 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b" exitCode=0 Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.861149 5049 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d" exitCode=0 Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.861166 5049 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321" exitCode=0 Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.861182 5049 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba" exitCode=2 Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.861200 5049 scope.go:117] "RemoveContainer" containerID="db8c3016d5abc1d920f17e35bebabb3ed9dfbbca68f6ac59db0ad43c7a21d071" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.863586 5049 generic.go:334] "Generic (PLEG): container finished" podID="b7637684-717f-4bf3-bba2-cd3dec71715d" containerID="30b2b66094fd34b7b088861d6774ca06f9f4836b136330e6ff6c3da503a0b28c" exitCode=0 Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.863642 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" event={"ID":"b7637684-717f-4bf3-bba2-cd3dec71715d","Type":"ContainerDied","Data":"30b2b66094fd34b7b088861d6774ca06f9f4836b136330e6ff6c3da503a0b28c"} Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.863741 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" event={"ID":"b7637684-717f-4bf3-bba2-cd3dec71715d","Type":"ContainerDied","Data":"f6604d4f49486aded8bef91938c42e5e23306c580e5e879893aaaa34fabf648c"} Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.863836 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.864740 5049 status_manager.go:851] "Failed to get status for pod" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tn44m\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.865997 5049 status_manager.go:851] "Failed to get status for pod" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.866630 5049 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.891882 5049 status_manager.go:851] "Failed to get status for pod" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.892589 5049 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.893411 5049 status_manager.go:851] "Failed to get status for pod" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tn44m\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.901742 5049 scope.go:117] "RemoveContainer" containerID="30b2b66094fd34b7b088861d6774ca06f9f4836b136330e6ff6c3da503a0b28c" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.925559 5049 scope.go:117] "RemoveContainer" containerID="30b2b66094fd34b7b088861d6774ca06f9f4836b136330e6ff6c3da503a0b28c" Jan 27 17:01:12 crc kubenswrapper[5049]: E0127 17:01:12.926245 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30b2b66094fd34b7b088861d6774ca06f9f4836b136330e6ff6c3da503a0b28c\": container with ID starting with 30b2b66094fd34b7b088861d6774ca06f9f4836b136330e6ff6c3da503a0b28c not found: ID does not exist" containerID="30b2b66094fd34b7b088861d6774ca06f9f4836b136330e6ff6c3da503a0b28c" Jan 27 17:01:12 crc kubenswrapper[5049]: I0127 17:01:12.926318 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30b2b66094fd34b7b088861d6774ca06f9f4836b136330e6ff6c3da503a0b28c"} err="failed to get container status 
\"30b2b66094fd34b7b088861d6774ca06f9f4836b136330e6ff6c3da503a0b28c\": rpc error: code = NotFound desc = could not find container \"30b2b66094fd34b7b088861d6774ca06f9f4836b136330e6ff6c3da503a0b28c\": container with ID starting with 30b2b66094fd34b7b088861d6774ca06f9f4836b136330e6ff6c3da503a0b28c not found: ID does not exist" Jan 27 17:01:13 crc kubenswrapper[5049]: I0127 17:01:13.876883 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.339864 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.340887 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.341578 5049 status_manager.go:851] "Failed to get status for pod" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.342112 5049 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.342588 5049 status_manager.go:851] "Failed to get status for pod" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tn44m\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.404355 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.405130 5049 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.405312 5049 status_manager.go:851] "Failed to get status for pod" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tn44m\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.405599 5049 status_manager.go:851] "Failed to get status for pod" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.457882 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/971a61c1-8167-465d-8012-9b19ba71bdce-var-lock\") pod \"971a61c1-8167-465d-8012-9b19ba71bdce\" (UID: \"971a61c1-8167-465d-8012-9b19ba71bdce\") " Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.457958 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/971a61c1-8167-465d-8012-9b19ba71bdce-kube-api-access\") pod \"971a61c1-8167-465d-8012-9b19ba71bdce\" (UID: \"971a61c1-8167-465d-8012-9b19ba71bdce\") " Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.457975 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.457994 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.458024 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.458044 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/971a61c1-8167-465d-8012-9b19ba71bdce-kubelet-dir\") pod \"971a61c1-8167-465d-8012-9b19ba71bdce\" (UID: \"971a61c1-8167-465d-8012-9b19ba71bdce\") " Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.458065 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod 
"f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.458124 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.458172 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.458204 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971a61c1-8167-465d-8012-9b19ba71bdce-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "971a61c1-8167-465d-8012-9b19ba71bdce" (UID: "971a61c1-8167-465d-8012-9b19ba71bdce"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.458306 5049 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.458321 5049 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.458330 5049 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.458337 5049 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/971a61c1-8167-465d-8012-9b19ba71bdce-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.459060 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/971a61c1-8167-465d-8012-9b19ba71bdce-var-lock" (OuterVolumeSpecName: "var-lock") pod "971a61c1-8167-465d-8012-9b19ba71bdce" (UID: "971a61c1-8167-465d-8012-9b19ba71bdce"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.463238 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/971a61c1-8167-465d-8012-9b19ba71bdce-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "971a61c1-8167-465d-8012-9b19ba71bdce" (UID: "971a61c1-8167-465d-8012-9b19ba71bdce"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.559868 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/971a61c1-8167-465d-8012-9b19ba71bdce-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.559918 5049 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/971a61c1-8167-465d-8012-9b19ba71bdce-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.893237 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.894145 5049 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da" exitCode=0 Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.894231 5049 scope.go:117] "RemoveContainer" containerID="0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.894363 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.897018 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"971a61c1-8167-465d-8012-9b19ba71bdce","Type":"ContainerDied","Data":"a8f335b4d2c29dfddc8398103d7568224548c17bf5557f3223f98e8bbdb6305f"} Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.897075 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8f335b4d2c29dfddc8398103d7568224548c17bf5557f3223f98e8bbdb6305f" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.897082 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.918005 5049 scope.go:117] "RemoveContainer" containerID="9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.934566 5049 scope.go:117] "RemoveContainer" containerID="8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.936250 5049 status_manager.go:851] "Failed to get status for pod" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.937360 5049 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.937970 5049 status_manager.go:851] "Failed to get status for pod" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tn44m\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.938636 5049 status_manager.go:851] "Failed to get status for pod" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.939268 5049 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.939789 5049 status_manager.go:851] "Failed to get status for pod" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tn44m\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.948937 5049 scope.go:117] "RemoveContainer" containerID="c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.965343 5049 scope.go:117] "RemoveContainer" containerID="c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da" Jan 27 17:01:14 crc kubenswrapper[5049]: I0127 17:01:14.982030 5049 scope.go:117] "RemoveContainer" containerID="edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.007783 5049 scope.go:117] "RemoveContainer" containerID="0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b" Jan 27 17:01:15 crc kubenswrapper[5049]: 
E0127 17:01:15.008380 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\": container with ID starting with 0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b not found: ID does not exist" containerID="0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.008413 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b"} err="failed to get container status \"0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\": rpc error: code = NotFound desc = could not find container \"0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b\": container with ID starting with 0901ba5b224bf8ee430b20727d4c20b05b9d47a9a349361979ded6dba77e053b not found: ID does not exist" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.008438 5049 scope.go:117] "RemoveContainer" containerID="9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d" Jan 27 17:01:15 crc kubenswrapper[5049]: E0127 17:01:15.008892 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\": container with ID starting with 9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d not found: ID does not exist" containerID="9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.008914 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d"} err="failed to get container status \"9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\": rpc error: code = NotFound desc = could not find container \"9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d\": container with ID starting with 9867850efdd3c7e83c3d00ce60ab0ebb54c5e3b1bbafe19213250d505fa53e0d not found: ID does not exist" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.008931 5049 scope.go:117] "RemoveContainer" containerID="8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321" Jan 27 17:01:15 crc kubenswrapper[5049]: E0127 17:01:15.009428 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\": container with ID starting with 8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321 not found: ID does not exist" containerID="8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.009522 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321"} err="failed to get container status \"8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\": rpc error: code = NotFound desc = could not find container \"8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321\": container with ID starting with 8d7df67db4bf377940e88e0b382106cb714f3187ae2ca6c76f8ea5dd1fc20321 not found: ID does not exist" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.009561 
5049 scope.go:117] "RemoveContainer" containerID="c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba" Jan 27 17:01:15 crc kubenswrapper[5049]: E0127 17:01:15.011137 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\": container with ID starting with c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba not found: ID does not exist" containerID="c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.011200 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba"} err="failed to get container status \"c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\": rpc error: code = NotFound desc = could not find container \"c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba\": container with ID starting with c0d59a6f4814a8072f49e844adb388025d8482ef91fbdbe823f24e03a30724ba not found: ID does not exist" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.011231 5049 scope.go:117] "RemoveContainer" containerID="c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da" Jan 27 17:01:15 crc kubenswrapper[5049]: E0127 17:01:15.011662 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\": container with ID starting with c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da not found: ID does not exist" containerID="c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.011741 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da"} err="failed to get container status \"c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\": rpc error: code = NotFound desc = could not find container \"c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da\": container with ID starting with c20180f480a2aa5080589a6c84815a67ed4ab3e1447f6bc1b535f1474832d7da not found: ID does not exist" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.011770 5049 scope.go:117] "RemoveContainer" containerID="edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50" Jan 27 17:01:15 crc kubenswrapper[5049]: E0127 17:01:15.012095 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\": container with ID starting with edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50 not found: ID does not exist" containerID="edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.012152 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50"} err="failed to get container status \"edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\": rpc error: code = NotFound desc = could not find container \"edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50\": container with ID starting with 
edca44bc65854f4f477f33f9c03c196e463f213cbc5d6eb40a5c1d854da94f50 not found: ID does not exist" Jan 27 17:01:15 crc kubenswrapper[5049]: E0127 17:01:15.391823 5049 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:15 crc kubenswrapper[5049]: E0127 17:01:15.392555 5049 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:15 crc kubenswrapper[5049]: E0127 17:01:15.392968 5049 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:15 crc kubenswrapper[5049]: E0127 17:01:15.393290 5049 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:15 crc kubenswrapper[5049]: E0127 17:01:15.393635 5049 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.393727 5049 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 17:01:15 crc kubenswrapper[5049]: E0127 17:01:15.394077 5049 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="200ms" Jan 27 17:01:15 crc kubenswrapper[5049]: E0127 17:01:15.594510 5049 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="400ms" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.647958 5049 status_manager.go:851] "Failed to get status for pod" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.648429 5049 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.649450 5049 status_manager.go:851] "Failed to get status for pod" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tn44m\": dial 
tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:15 crc kubenswrapper[5049]: I0127 17:01:15.657198 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 17:01:15 crc kubenswrapper[5049]: E0127 17:01:15.996025 5049 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="800ms" Jan 27 17:01:16 crc kubenswrapper[5049]: E0127 17:01:16.797479 5049 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="1.6s" Jan 27 17:01:18 crc kubenswrapper[5049]: E0127 17:01:18.399127 5049 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="3.2s" Jan 27 17:01:21 crc kubenswrapper[5049]: E0127 17:01:21.600321 5049 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="6.4s" Jan 27 17:01:22 crc kubenswrapper[5049]: E0127 17:01:22.367355 5049 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.20:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ea522c7bdd80d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 17:01:12.401762317 +0000 UTC m=+247.500735876,LastTimestamp:2026-01-27 17:01:12.401762317 +0000 UTC m=+247.500735876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 17:01:23 crc kubenswrapper[5049]: I0127 17:01:23.646139 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:23 crc kubenswrapper[5049]: I0127 17:01:23.648517 5049 status_manager.go:851] "Failed to get status for pod" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tn44m\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:23 crc kubenswrapper[5049]: I0127 17:01:23.649312 5049 status_manager.go:851] "Failed to get status for pod" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:23 crc kubenswrapper[5049]: I0127 17:01:23.674618 5049 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="227f3d04-5eef-4098-ba74-02c5298ec452" Jan 27 17:01:23 crc kubenswrapper[5049]: I0127 17:01:23.674701 5049 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="227f3d04-5eef-4098-ba74-02c5298ec452" Jan 27 17:01:23 crc kubenswrapper[5049]: E0127 17:01:23.675571 5049 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:23 crc kubenswrapper[5049]: I0127 17:01:23.676379 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:23 crc kubenswrapper[5049]: I0127 17:01:23.969288 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"72d6a964e0e949875c2fd4f69e32a9719b46c586286612230a6ac6b85ba6a82b"} Jan 27 17:01:23 crc kubenswrapper[5049]: I0127 17:01:23.969634 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c23cc34506af0326024567b99d56ec42faad9c3bf38669907bb2ad2e8e1ab157"} Jan 27 17:01:23 crc kubenswrapper[5049]: I0127 17:01:23.970006 5049 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="227f3d04-5eef-4098-ba74-02c5298ec452" Jan 27 17:01:23 crc kubenswrapper[5049]: I0127 17:01:23.970033 5049 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="227f3d04-5eef-4098-ba74-02c5298ec452" Jan 27 17:01:23 crc kubenswrapper[5049]: E0127 17:01:23.970856 5049 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:23 crc kubenswrapper[5049]: I0127 17:01:23.970921 5049 status_manager.go:851] "Failed to get status for pod" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tn44m\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:23 crc kubenswrapper[5049]: I0127 17:01:23.971485 5049 status_manager.go:851] "Failed to get status for pod" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:24 crc kubenswrapper[5049]: I0127 17:01:24.979889 5049 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="72d6a964e0e949875c2fd4f69e32a9719b46c586286612230a6ac6b85ba6a82b" exitCode=0 Jan 27 17:01:24 crc kubenswrapper[5049]: I0127 17:01:24.979954 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"72d6a964e0e949875c2fd4f69e32a9719b46c586286612230a6ac6b85ba6a82b"} Jan 27 17:01:24 crc kubenswrapper[5049]: I0127 17:01:24.980388 5049 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="227f3d04-5eef-4098-ba74-02c5298ec452" Jan 27 17:01:24 crc kubenswrapper[5049]: I0127 17:01:24.980414 5049 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="227f3d04-5eef-4098-ba74-02c5298ec452" Jan 27 17:01:24 crc kubenswrapper[5049]: I0127 17:01:24.981039 5049 status_manager.go:851] "Failed to get status for pod" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:24 crc kubenswrapper[5049]: E0127 17:01:24.981104 5049 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:24 crc kubenswrapper[5049]: I0127 17:01:24.981551 5049 status_manager.go:851] "Failed to get status for pod" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" pod="openshift-authentication/oauth-openshift-558db77b4-tn44m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tn44m\": dial tcp 38.102.83.20:6443: connect: connection refused" Jan 27 17:01:25 crc kubenswrapper[5049]: I0127 17:01:25.988045 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7d6c208c903dd8987f70b2c8169209f34485898dae30a0c7021c9b62cb7019d5"} Jan 27 17:01:25 crc kubenswrapper[5049]: I0127 17:01:25.988437 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c3ab82da8d65cb7d4903f2def83757e1b2341f72743d775038ec7084fc263af8"} Jan 27 17:01:25 crc kubenswrapper[5049]: I0127 17:01:25.988448 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"de436f95001754a2c3b7784253d5784c11a8b12602c93c81e0e2540ebd50e3a7"} Jan 27 17:01:26 crc kubenswrapper[5049]: I0127 17:01:26.996091 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 17:01:26 crc kubenswrapper[5049]: I0127 17:01:26.996137 5049 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9" exitCode=1 Jan 27 17:01:26 crc kubenswrapper[5049]: I0127 17:01:26.996189 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9"} Jan 27 17:01:26 crc kubenswrapper[5049]: I0127 17:01:26.996656 5049 scope.go:117] "RemoveContainer" containerID="ff509dceee78ca5b118a42255243d8d6a0959943f46d94379d1732a158071ba9" Jan 27 17:01:26 crc kubenswrapper[5049]: I0127 17:01:26.998946 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"84ce5ab0446a490927247a545b35afaa73fccfb1f8176e67d658ab1bdddc3664"} Jan 27 17:01:26 crc kubenswrapper[5049]: I0127 17:01:26.999125 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6be893304564714c5be47bece06b068e9c9b327ffb4a084fdf068ae1658739c9"} Jan 27 17:01:26 crc kubenswrapper[5049]: I0127 17:01:26.999138 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:26 crc kubenswrapper[5049]: I0127 17:01:26.999151 5049 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="227f3d04-5eef-4098-ba74-02c5298ec452" Jan 27 17:01:26 crc kubenswrapper[5049]: I0127 17:01:26.999170 5049 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="227f3d04-5eef-4098-ba74-02c5298ec452" Jan 27 17:01:28 crc kubenswrapper[5049]: I0127 17:01:28.007619 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 17:01:28 crc kubenswrapper[5049]: I0127 17:01:28.007704 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a26617096cdae0b01c64820e38c3bd0ff778d0e38af7cb274809861aa9348b52"} Jan 27 17:01:28 crc kubenswrapper[5049]: I0127 17:01:28.077022 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 17:01:28 crc kubenswrapper[5049]: I0127 17:01:28.081131 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 17:01:28 crc kubenswrapper[5049]: I0127 17:01:28.676724 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:28 crc kubenswrapper[5049]: I0127 17:01:28.677159 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:28 crc kubenswrapper[5049]: I0127 17:01:28.683093 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:29 crc kubenswrapper[5049]: I0127 17:01:29.014913 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 17:01:32 crc kubenswrapper[5049]: I0127 17:01:32.007001 5049 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:32 crc kubenswrapper[5049]: I0127 17:01:32.036881 5049 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="227f3d04-5eef-4098-ba74-02c5298ec452" Jan 27 17:01:32 crc kubenswrapper[5049]: I0127 17:01:32.036916 5049 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="227f3d04-5eef-4098-ba74-02c5298ec452" Jan 27 17:01:32 crc kubenswrapper[5049]: I0127 17:01:32.040968 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:32 crc kubenswrapper[5049]: I0127 17:01:32.043564 5049 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1ee0e319-d8db-4025-b37d-a21fadcae41f" Jan 27 17:01:33 crc kubenswrapper[5049]: I0127 17:01:33.055247 5049 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="227f3d04-5eef-4098-ba74-02c5298ec452" Jan 27 17:01:33 crc kubenswrapper[5049]: I0127 17:01:33.055306 5049 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="227f3d04-5eef-4098-ba74-02c5298ec452" Jan 27 17:01:35 crc kubenswrapper[5049]: I0127 17:01:35.663743 5049 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1ee0e319-d8db-4025-b37d-a21fadcae41f" Jan 27 17:01:39 crc kubenswrapper[5049]: I0127 17:01:39.199332 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 17:01:42 crc kubenswrapper[5049]: I0127 17:01:42.088523 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 17:01:42 crc kubenswrapper[5049]: I0127 17:01:42.872198 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 17:01:42 crc kubenswrapper[5049]: I0127 17:01:42.909702 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 17:01:43 crc kubenswrapper[5049]: I0127 17:01:43.001592 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 17:01:43 crc kubenswrapper[5049]: I0127 17:01:43.088961 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 17:01:43 crc 
kubenswrapper[5049]: I0127 17:01:43.434565 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 17:01:43 crc kubenswrapper[5049]: I0127 17:01:43.984244 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 17:01:44 crc kubenswrapper[5049]: I0127 17:01:44.103732 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 17:01:44 crc kubenswrapper[5049]: I0127 17:01:44.363960 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 17:01:44 crc kubenswrapper[5049]: I0127 17:01:44.452419 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 17:01:44 crc kubenswrapper[5049]: I0127 17:01:44.890729 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 17:01:44 crc kubenswrapper[5049]: I0127 17:01:44.898709 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 17:01:44 crc kubenswrapper[5049]: I0127 17:01:44.922107 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 17:01:44 crc kubenswrapper[5049]: I0127 17:01:44.946526 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.016337 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.144632 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.154913 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.239609 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.241440 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.277565 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.353960 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.385658 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.441138 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.552321 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.568620 5049 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.668536 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.753553 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.846276 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.899848 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 17:01:45 crc kubenswrapper[5049]: I0127 17:01:45.982773 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.099610 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.135441 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.148024 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.358372 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.392820 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.454642 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.519114 5049 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.695833 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.753713 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.762704 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.776798 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.844917 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.864000 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.870339 5049 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 17:01:46 crc kubenswrapper[5049]: I0127 17:01:46.976115 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 17:01:47 crc kubenswrapper[5049]: I0127 17:01:47.232779 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 17:01:47 crc kubenswrapper[5049]: I0127 17:01:47.288899 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 17:01:47 crc kubenswrapper[5049]: I0127 17:01:47.339459 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 17:01:47 crc kubenswrapper[5049]: I0127 17:01:47.391397 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 17:01:47 crc kubenswrapper[5049]: I0127 17:01:47.402884 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 17:01:47 crc kubenswrapper[5049]: I0127 17:01:47.406625 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 17:01:47 crc kubenswrapper[5049]: I0127 17:01:47.615976 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 17:01:47 crc kubenswrapper[5049]: I0127 17:01:47.672328 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 17:01:47 crc kubenswrapper[5049]: I0127 17:01:47.804798 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 17:01:47 crc kubenswrapper[5049]: I0127 17:01:47.831195 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 17:01:47 crc kubenswrapper[5049]: I0127 17:01:47.913383 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 17:01:47 crc kubenswrapper[5049]: I0127 17:01:47.981786 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 17:01:47 crc kubenswrapper[5049]: I0127 17:01:47.995119 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 17:01:47 crc kubenswrapper[5049]: I0127 17:01:47.999767 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.079891 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.098313 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.098541 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 
17:01:48.102289 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.161954 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.192601 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.211861 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.217150 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.244105 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.315837 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.341781 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.365156 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.379565 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.385996 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.490823 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.596916 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.613071 5049 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.618997 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-tn44m"] Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.619089 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.623654 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.658298 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.65827188 podStartE2EDuration="16.65827188s" podCreationTimestamp="2026-01-27 17:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:01:48.651784461 +0000 UTC 
m=+283.750758020" watchObservedRunningTime="2026-01-27 17:01:48.65827188 +0000 UTC m=+283.757245439" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.672866 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.780637 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.803975 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.874844 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 17:01:48 crc kubenswrapper[5049]: I0127 17:01:48.998829 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.022579 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.234618 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.267307 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.333173 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.347547 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.474855 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.506583 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.534440 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.540265 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.576743 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.652360 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" path="/var/lib/kubelet/pods/b7637684-717f-4bf3-bba2-cd3dec71715d/volumes" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.661835 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.701298 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 
17:01:49.736004 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.751956 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.820599 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.991178 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 17:01:49 crc kubenswrapper[5049]: I0127 17:01:49.998371 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.242447 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.274748 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.312095 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.349653 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.455238 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.509958 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.523431 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.589369 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.626624 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.643220 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.644224 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.661897 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.753803 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.847078 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.915197 5049 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.915197 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.951022 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.960984 5049 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.980881 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 17:01:50 crc kubenswrapper[5049]: I0127 17:01:50.986615 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.031467 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.077972 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.232767 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.235769 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.240343 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.315003 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.329745 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.356895 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.667927 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.725773 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.782137 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.803646 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.869732 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.880169 5049 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 17:01:51 crc kubenswrapper[5049]: I0127 17:01:51.997201 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.011998 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.029629 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.067477 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.077871 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.114136 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.259130 5049 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.334913 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.442132 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.457140 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.491264 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.605742 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.608902 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.628595 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.682889 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.685140 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.729719 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.779475 5049 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.822024 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 17:01:52 crc kubenswrapper[5049]: I0127 17:01:52.872124 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.095882 5049 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.099500 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.160136 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.208804 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.228070 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.250539 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.329267 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.333994 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.368429 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.498906 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.522687 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.577427 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.665562 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.763891 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.779453 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.784910 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.793919 5049 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.891887 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.918366 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.979965 5049 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 17:01:53 crc kubenswrapper[5049]: I0127 17:01:53.996098 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.099315 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.142223 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.144721 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.161748 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.176425 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.291004 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.318889 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.349469 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.412301 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.422571 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.540257 5049 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.540759 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://18d3fd0453cc298f1f6b654a4c9bf30d4d8f1e1700a7afdb6f219de701ed48ed" gracePeriod=5 Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.578615 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 17:01:54 
crc kubenswrapper[5049]: I0127 17:01:54.618816 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.689163 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.733520 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.764104 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.861908 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.872598 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.965687 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 17:01:54 crc kubenswrapper[5049]: I0127 17:01:54.986636 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 17:01:55 crc kubenswrapper[5049]: I0127 17:01:55.138530 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 17:01:55 crc kubenswrapper[5049]: I0127 17:01:55.345990 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 17:01:55 crc kubenswrapper[5049]: I0127 17:01:55.559459 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 17:01:55 crc kubenswrapper[5049]: I0127 17:01:55.728244 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 17:01:55 crc kubenswrapper[5049]: I0127 17:01:55.783858 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 17:01:55 crc kubenswrapper[5049]: I0127 17:01:55.820945 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 17:01:55 crc kubenswrapper[5049]: I0127 17:01:55.853817 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 17:01:55 crc kubenswrapper[5049]: I0127 17:01:55.931555 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 17:01:55 crc kubenswrapper[5049]: I0127 17:01:55.933141 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 17:01:56 crc kubenswrapper[5049]: I0127 17:01:56.069980 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 17:01:56 crc kubenswrapper[5049]: I0127 17:01:56.190547 5049 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 17:01:56 crc kubenswrapper[5049]: I0127 17:01:56.198363 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 17:01:56 crc kubenswrapper[5049]: I0127 17:01:56.326703 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 17:01:56 crc kubenswrapper[5049]: I0127 17:01:56.402422 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 17:01:56 crc kubenswrapper[5049]: I0127 17:01:56.481385 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 17:01:56 crc kubenswrapper[5049]: I0127 17:01:56.501265 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 17:01:56 crc kubenswrapper[5049]: I0127 17:01:56.531444 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 17:01:56 crc kubenswrapper[5049]: I0127 17:01:56.543046 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 17:01:56 crc kubenswrapper[5049]: I0127 17:01:56.614080 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 17:01:56 crc kubenswrapper[5049]: I0127 17:01:56.812412 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 17:01:56 crc kubenswrapper[5049]: I0127 17:01:56.836895 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 17:01:56 crc kubenswrapper[5049]: I0127 17:01:56.848751 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 17:01:57 crc kubenswrapper[5049]: I0127 17:01:57.003769 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 17:01:57 crc kubenswrapper[5049]: I0127 17:01:57.016168 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 17:01:57 crc kubenswrapper[5049]: I0127 17:01:57.081980 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 17:01:57 crc kubenswrapper[5049]: I0127 17:01:57.153199 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 17:01:57 crc kubenswrapper[5049]: I0127 17:01:57.258581 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 17:01:57 crc kubenswrapper[5049]: I0127 17:01:57.318155 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 17:01:57 crc kubenswrapper[5049]: I0127 17:01:57.675095 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 17:01:57 crc kubenswrapper[5049]: I0127 17:01:57.683161 5049 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 17:01:57 crc kubenswrapper[5049]: I0127 17:01:57.693913 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 17:01:57 crc kubenswrapper[5049]: I0127 17:01:57.767275 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 17:01:57 crc kubenswrapper[5049]: I0127 17:01:57.826922 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 17:01:58 crc kubenswrapper[5049]: I0127 17:01:58.052542 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 17:01:58 crc kubenswrapper[5049]: I0127 17:01:58.329892 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 17:01:58 crc kubenswrapper[5049]: I0127 17:01:58.707829 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 17:01:58 crc kubenswrapper[5049]: I0127 17:01:58.912322 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 17:01:58 crc kubenswrapper[5049]: I0127 17:01:58.996466 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.008306 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-655cc67ff8-7bhjx"] Jan 27 17:01:59 crc kubenswrapper[5049]: E0127 17:01:59.008537 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" containerName="oauth-openshift" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.008551 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" containerName="oauth-openshift" Jan 27 17:01:59 crc kubenswrapper[5049]: E0127 17:01:59.008564 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.008572 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 17:01:59 crc kubenswrapper[5049]: E0127 17:01:59.008583 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" containerName="installer" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.008590 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" containerName="installer" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.008720 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="971a61c1-8167-465d-8012-9b19ba71bdce" containerName="installer" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.008735 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.008747 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7637684-717f-4bf3-bba2-cd3dec71715d" containerName="oauth-openshift" Jan 27 17:01:59 crc kubenswrapper[5049]: 
I0127 17:01:59.009235 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.011982 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.012111 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.012400 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.012493 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.012558 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.012935 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.013009 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.013412 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.013516 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.013753 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.014608 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.017074 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.030206 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.046903 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.047569 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-655cc67ff8-7bhjx"] Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.065880 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.122087 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-user-template-error\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: 
\"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.122134 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkp54\" (UniqueName: \"kubernetes.io/projected/f81df740-3c2d-4ac3-914a-89da8c60e576-kube-api-access-wkp54\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.122159 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-router-certs\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.122174 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-session\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.122191 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-service-ca\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.122208 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-user-template-login\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.122358 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f81df740-3c2d-4ac3-914a-89da8c60e576-audit-dir\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.122449 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-cliconfig\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.122493 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f81df740-3c2d-4ac3-914a-89da8c60e576-audit-policies\") pod 
\"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.122582 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.122713 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.122836 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.122930 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-serving-cert\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.123038 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.137362 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.170827 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224061 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f81df740-3c2d-4ac3-914a-89da8c60e576-audit-dir\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224128 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-cliconfig\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224169 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f81df740-3c2d-4ac3-914a-89da8c60e576-audit-policies\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224199 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224256 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224293 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224336 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-serving-cert\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224355 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224410 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-user-template-error\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224426 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkp54\" 
(UniqueName: \"kubernetes.io/projected/f81df740-3c2d-4ac3-914a-89da8c60e576-kube-api-access-wkp54\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224447 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-router-certs\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224481 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-session\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224500 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-service-ca\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224519 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-user-template-login\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.224243 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f81df740-3c2d-4ac3-914a-89da8c60e576-audit-dir\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.225798 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f81df740-3c2d-4ac3-914a-89da8c60e576-audit-policies\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.226535 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-cliconfig\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.227193 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.229791 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.234349 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.234576 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-serving-cert\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.234902 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-router-certs\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.237547 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.240167 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-user-template-login\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.240623 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.240699 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.251801 5049 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-system-session\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.256191 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkp54\" (UniqueName: \"kubernetes.io/projected/f81df740-3c2d-4ac3-914a-89da8c60e576-kube-api-access-wkp54\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.256627 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f81df740-3c2d-4ac3-914a-89da8c60e576-v4-0-config-user-template-error\") pod \"oauth-openshift-655cc67ff8-7bhjx\" (UID: \"f81df740-3c2d-4ac3-914a-89da8c60e576\") " pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.321842 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.328071 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.740377 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-655cc67ff8-7bhjx"] Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.911578 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 17:01:59 crc kubenswrapper[5049]: I0127 17:01:59.950468 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.098712 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.099360 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.237254 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.237352 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.237400 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.237391 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.237482 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.237486 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.237538 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.237781 5049 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.237792 5049 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.237880 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.237921 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.246572 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.283590 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" event={"ID":"f81df740-3c2d-4ac3-914a-89da8c60e576","Type":"ContainerStarted","Data":"ad6e99ab829cacc3a9d647539e3a50c57905b7c2ecb29ed62901a3c658661af2"} Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.283665 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" event={"ID":"f81df740-3c2d-4ac3-914a-89da8c60e576","Type":"ContainerStarted","Data":"4fe201c5aaf5ef1540f5dda19ee4d4b490da21cc1a0c799962801609913da20c"} Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.283704 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.285024 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.285181 5049 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="18d3fd0453cc298f1f6b654a4c9bf30d4d8f1e1700a7afdb6f219de701ed48ed" exitCode=137 Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.285308 5049 scope.go:117] "RemoveContainer" containerID="18d3fd0453cc298f1f6b654a4c9bf30d4d8f1e1700a7afdb6f219de701ed48ed" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.285455 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.304617 5049 scope.go:117] "RemoveContainer" containerID="18d3fd0453cc298f1f6b654a4c9bf30d4d8f1e1700a7afdb6f219de701ed48ed" Jan 27 17:02:00 crc kubenswrapper[5049]: E0127 17:02:00.305255 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18d3fd0453cc298f1f6b654a4c9bf30d4d8f1e1700a7afdb6f219de701ed48ed\": container with ID starting with 18d3fd0453cc298f1f6b654a4c9bf30d4d8f1e1700a7afdb6f219de701ed48ed not found: ID does not exist" containerID="18d3fd0453cc298f1f6b654a4c9bf30d4d8f1e1700a7afdb6f219de701ed48ed" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.305312 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18d3fd0453cc298f1f6b654a4c9bf30d4d8f1e1700a7afdb6f219de701ed48ed"} err="failed to get container status \"18d3fd0453cc298f1f6b654a4c9bf30d4d8f1e1700a7afdb6f219de701ed48ed\": rpc error: code = NotFound desc = could not find container \"18d3fd0453cc298f1f6b654a4c9bf30d4d8f1e1700a7afdb6f219de701ed48ed\": container with ID starting with 18d3fd0453cc298f1f6b654a4c9bf30d4d8f1e1700a7afdb6f219de701ed48ed not found: ID does not exist" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.339150 5049 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.339438 5049 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.339532 5049 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.486701 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" Jan 27 17:02:00 crc kubenswrapper[5049]: I0127 17:02:00.508911 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-655cc67ff8-7bhjx" podStartSLOduration=74.508888665 podStartE2EDuration="1m14.508888665s" podCreationTimestamp="2026-01-27 17:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:02:00.306001345 +0000 UTC m=+295.404974914" watchObservedRunningTime="2026-01-27 17:02:00.508888665 +0000 UTC m=+295.607862234" Jan 27 17:02:01 crc kubenswrapper[5049]: I0127 17:02:01.228341 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 17:02:01 crc kubenswrapper[5049]: I0127 17:02:01.659491 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 17:02:05 crc kubenswrapper[5049]: I0127 17:02:05.413052 5049 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 17:02:15 crc kubenswrapper[5049]: I0127 
17:02:15.390284 5049 generic.go:334] "Generic (PLEG): container finished" podID="7e762e88-c00a-49b7-8a84-48c7fe50b602" containerID="ea11c626d2afb5b33c9cdfe25b3939d0efb232227a32ef63c4a304b26ec1ab2b" exitCode=0
Jan 27 17:02:15 crc kubenswrapper[5049]: I0127 17:02:15.390875 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" event={"ID":"7e762e88-c00a-49b7-8a84-48c7fe50b602","Type":"ContainerDied","Data":"ea11c626d2afb5b33c9cdfe25b3939d0efb232227a32ef63c4a304b26ec1ab2b"}
Jan 27 17:02:15 crc kubenswrapper[5049]: I0127 17:02:15.392150 5049 scope.go:117] "RemoveContainer" containerID="ea11c626d2afb5b33c9cdfe25b3939d0efb232227a32ef63c4a304b26ec1ab2b"
Jan 27 17:02:16 crc kubenswrapper[5049]: I0127 17:02:16.398992 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" event={"ID":"7e762e88-c00a-49b7-8a84-48c7fe50b602","Type":"ContainerStarted","Data":"546c049391ea1e5ca260c7fe1a33952af9fc6ebcae8136a0d6dd7c863647a992"}
Jan 27 17:02:16 crc kubenswrapper[5049]: I0127 17:02:16.399448 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz"
Jan 27 17:02:16 crc kubenswrapper[5049]: I0127 17:02:16.401269 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.647774 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fgsrj"]
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.649387 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.665371 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fgsrj"]
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.820986 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/81db2cb3-6831-48b9-8116-9c5abb8cfca0-registry-certificates\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.821051 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpc67\" (UniqueName: \"kubernetes.io/projected/81db2cb3-6831-48b9-8116-9c5abb8cfca0-kube-api-access-kpc67\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.821071 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81db2cb3-6831-48b9-8116-9c5abb8cfca0-trusted-ca\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.821106 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81db2cb3-6831-48b9-8116-9c5abb8cfca0-bound-sa-token\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.821155 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/81db2cb3-6831-48b9-8116-9c5abb8cfca0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.821188 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.821206 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/81db2cb3-6831-48b9-8116-9c5abb8cfca0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.821233 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/81db2cb3-6831-48b9-8116-9c5abb8cfca0-registry-tls\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.851853 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.922015 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/81db2cb3-6831-48b9-8116-9c5abb8cfca0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.922381 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/81db2cb3-6831-48b9-8116-9c5abb8cfca0-registry-tls\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.922419 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/81db2cb3-6831-48b9-8116-9c5abb8cfca0-registry-certificates\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.922450 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpc67\" (UniqueName: \"kubernetes.io/projected/81db2cb3-6831-48b9-8116-9c5abb8cfca0-kube-api-access-kpc67\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.922469 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81db2cb3-6831-48b9-8116-9c5abb8cfca0-trusted-ca\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.922500 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81db2cb3-6831-48b9-8116-9c5abb8cfca0-bound-sa-token\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.922524 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/81db2cb3-6831-48b9-8116-9c5abb8cfca0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.924059 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/81db2cb3-6831-48b9-8116-9c5abb8cfca0-trusted-ca\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.924118 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/81db2cb3-6831-48b9-8116-9c5abb8cfca0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.924192 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/81db2cb3-6831-48b9-8116-9c5abb8cfca0-registry-certificates\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.939967 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpc67\" (UniqueName: \"kubernetes.io/projected/81db2cb3-6831-48b9-8116-9c5abb8cfca0-kube-api-access-kpc67\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.943152 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/81db2cb3-6831-48b9-8116-9c5abb8cfca0-bound-sa-token\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.945911 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/81db2cb3-6831-48b9-8116-9c5abb8cfca0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.947216 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/81db2cb3-6831-48b9-8116-9c5abb8cfca0-registry-tls\") pod \"image-registry-66df7c8f76-fgsrj\" (UID: \"81db2cb3-6831-48b9-8116-9c5abb8cfca0\") " pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:08 crc kubenswrapper[5049]: I0127 17:03:08.969561 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:09 crc kubenswrapper[5049]: I0127 17:03:09.404407 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fgsrj"]
Jan 27 17:03:09 crc kubenswrapper[5049]: I0127 17:03:09.768000 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj" event={"ID":"81db2cb3-6831-48b9-8116-9c5abb8cfca0","Type":"ContainerStarted","Data":"fad0eb7fa23256cc0f6a3ad80e9f4d28474eea34d4215f620184f1385181fe2d"}
Jan 27 17:03:09 crc kubenswrapper[5049]: I0127 17:03:09.768048 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj" event={"ID":"81db2cb3-6831-48b9-8116-9c5abb8cfca0","Type":"ContainerStarted","Data":"2b1f69e80118cba1d2d105c33fa5b92814208bcbf5b83c839557c53a4834ae8f"}
Jan 27 17:03:09 crc kubenswrapper[5049]: I0127 17:03:09.769386 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj"
Jan 27 17:03:09 crc kubenswrapper[5049]: I0127 17:03:09.792868 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj" podStartSLOduration=1.792854188 podStartE2EDuration="1.792854188s" podCreationTimestamp="2026-01-27 17:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:03:09.791279133 +0000 UTC m=+364.890252682" watchObservedRunningTime="2026-01-27 17:03:09.792854188 +0000 UTC m=+364.891827737"
Jan 27 17:03:17 crc kubenswrapper[5049]: I0127 17:03:17.782040 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:03:17 crc kubenswrapper[5049]: I0127 17:03:17.783932 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
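Annotation: the two entries above record a failed HTTP liveness probe against machine-config-daemon. "connect: connection refused" means nothing was listening on 127.0.0.1:8798 at probe time; it is a transport-level failure, not an unhealthy HTTP reply. A minimal sketch of the kind of endpoint such a probe expects (an illustrative stand-in, not machine-config-daemon's actual code):

    package main

    // Sketch of an HTTP health endpoint matching the probe in the log:
    // GET http://127.0.0.1:8798/health answered with a 2xx status.
    import (
    	"log"
    	"net/http"
    )

    func main() {
    	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
    		w.WriteHeader(http.StatusOK) // kubelet treats any status in 200-399 as success
    	})
    	// "connection refused" in the log means this listener was absent when kubelet dialed.
    	log.Fatal(http.ListenAndServe("127.0.0.1:8798", nil))
    }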
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:03:28 crc kubenswrapper[5049]: I0127 17:03:28.977511 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fgsrj" Jan 27 17:03:29 crc kubenswrapper[5049]: I0127 17:03:29.053056 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ttw4x"] Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.262620 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6g667"] Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.263364 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6g667" podUID="50c0ecb4-7212-4c52-ba39-4fb298404899" containerName="registry-server" containerID="cri-o://487962a73cc4113fa682ba4bfab96f7f53a42221fbf2632d755efe4667b4ad76" gracePeriod=30 Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.270565 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bwn2r"] Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.270880 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bwn2r" podUID="6841cc70-80cd-499f-a8e6-e2a9031dcbf0" containerName="registry-server" containerID="cri-o://16c92a25fb92ad0237156f4484509c4f6a635732698c598a7cc6609344ee9a9d" gracePeriod=30 Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.284334 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q7vfz"] Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.284747 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" podUID="7e762e88-c00a-49b7-8a84-48c7fe50b602" containerName="marketplace-operator" containerID="cri-o://546c049391ea1e5ca260c7fe1a33952af9fc6ebcae8136a0d6dd7c863647a992" gracePeriod=30 Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.299872 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjtk"] Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.300538 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xdjtk" podUID="c45b66c7-0a92-456f-927a-fe596ffdedb3" containerName="registry-server" containerID="cri-o://b960d99204142b0cf2a2955b0503256d1dee0ac041ceef049f477a091614042a" gracePeriod=30 Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.313412 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-58k4t"] Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.314253 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-58k4t" podUID="bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" containerName="registry-server" containerID="cri-o://3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde" gracePeriod=30 Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.316523 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rdlx6"] Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.317301 5049 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.327979 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rdlx6"] Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.429559 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4ded2900-908e-416c-9028-cfb5926b7ad5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rdlx6\" (UID: \"4ded2900-908e-416c-9028-cfb5926b7ad5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.429905 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm87x\" (UniqueName: \"kubernetes.io/projected/4ded2900-908e-416c-9028-cfb5926b7ad5-kube-api-access-dm87x\") pod \"marketplace-operator-79b997595-rdlx6\" (UID: \"4ded2900-908e-416c-9028-cfb5926b7ad5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.429939 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ded2900-908e-416c-9028-cfb5926b7ad5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rdlx6\" (UID: \"4ded2900-908e-416c-9028-cfb5926b7ad5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.534214 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4ded2900-908e-416c-9028-cfb5926b7ad5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rdlx6\" (UID: \"4ded2900-908e-416c-9028-cfb5926b7ad5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.534279 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm87x\" (UniqueName: \"kubernetes.io/projected/4ded2900-908e-416c-9028-cfb5926b7ad5-kube-api-access-dm87x\") pod \"marketplace-operator-79b997595-rdlx6\" (UID: \"4ded2900-908e-416c-9028-cfb5926b7ad5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.534317 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ded2900-908e-416c-9028-cfb5926b7ad5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rdlx6\" (UID: \"4ded2900-908e-416c-9028-cfb5926b7ad5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.536244 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ded2900-908e-416c-9028-cfb5926b7ad5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rdlx6\" (UID: \"4ded2900-908e-416c-9028-cfb5926b7ad5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" Jan 27 17:03:38 crc kubenswrapper[5049]: E0127 17:03:38.548451 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of 3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde is running failed: container process not found" containerID="3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.549791 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4ded2900-908e-416c-9028-cfb5926b7ad5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rdlx6\" (UID: \"4ded2900-908e-416c-9028-cfb5926b7ad5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" Jan 27 17:03:38 crc kubenswrapper[5049]: E0127 17:03:38.550420 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde is running failed: container process not found" containerID="3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 17:03:38 crc kubenswrapper[5049]: E0127 17:03:38.551003 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde is running failed: container process not found" containerID="3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 17:03:38 crc kubenswrapper[5049]: E0127 17:03:38.551055 5049 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-58k4t" podUID="bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" containerName="registry-server" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.553526 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm87x\" (UniqueName: \"kubernetes.io/projected/4ded2900-908e-416c-9028-cfb5926b7ad5-kube-api-access-dm87x\") pod \"marketplace-operator-79b997595-rdlx6\" (UID: \"4ded2900-908e-416c-9028-cfb5926b7ad5\") " pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.692982 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.696219 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6g667" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.706392 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.707030 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bwn2r" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.711131 5049 util.go:48] "No ready sandbox for pod can be found. 
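Annotation: the "ExecSync cmd from runtime service failed" errors above come from the registry-server readiness probe: kubelet execs grpc_health_probe -addr=:50051 inside the container, which was grace-period-killed moments earlier, so CRI-O returns NotFound for its PID. Functionally the probe binary performs a standard gRPC health check; a rough Go equivalent, under the assumption that the server exposes the stock grpc_health_v1 service (hypothetical client, the real binary has more options):

    package main

    // Rough equivalent of `grpc_health_probe -addr=:50051`: call Check on the
    // standard gRPC health service and exit non-zero unless it reports SERVING.
    import (
    	"context"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    	defer cancel()
    	conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatalf("dial: %v", err)
    	}
    	defer conn.Close()
    	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
    	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
    		log.Fatalf("not serving: status=%v err=%v", resp.GetStatus(), err) // non-zero exit = probe failure
    	}
    }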
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdjtk" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.735910 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58k4t" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.736647 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e762e88-c00a-49b7-8a84-48c7fe50b602-marketplace-operator-metrics\") pod \"7e762e88-c00a-49b7-8a84-48c7fe50b602\" (UID: \"7e762e88-c00a-49b7-8a84-48c7fe50b602\") " Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.736695 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpzw5\" (UniqueName: \"kubernetes.io/projected/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-kube-api-access-rpzw5\") pod \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\" (UID: \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\") " Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.736721 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-utilities\") pod \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\" (UID: \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\") " Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.736744 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84px8\" (UniqueName: \"kubernetes.io/projected/7e762e88-c00a-49b7-8a84-48c7fe50b602-kube-api-access-84px8\") pod \"7e762e88-c00a-49b7-8a84-48c7fe50b602\" (UID: \"7e762e88-c00a-49b7-8a84-48c7fe50b602\") " Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.736765 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50c0ecb4-7212-4c52-ba39-4fb298404899-catalog-content\") pod \"50c0ecb4-7212-4c52-ba39-4fb298404899\" (UID: \"50c0ecb4-7212-4c52-ba39-4fb298404899\") " Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.736790 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e762e88-c00a-49b7-8a84-48c7fe50b602-marketplace-trusted-ca\") pod \"7e762e88-c00a-49b7-8a84-48c7fe50b602\" (UID: \"7e762e88-c00a-49b7-8a84-48c7fe50b602\") " Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.736824 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-catalog-content\") pod \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\" (UID: \"6841cc70-80cd-499f-a8e6-e2a9031dcbf0\") " Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.737022 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn5gg\" (UniqueName: \"kubernetes.io/projected/50c0ecb4-7212-4c52-ba39-4fb298404899-kube-api-access-pn5gg\") pod \"50c0ecb4-7212-4c52-ba39-4fb298404899\" (UID: \"50c0ecb4-7212-4c52-ba39-4fb298404899\") " Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.737052 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbdsx\" (UniqueName: \"kubernetes.io/projected/c45b66c7-0a92-456f-927a-fe596ffdedb3-kube-api-access-lbdsx\") pod \"c45b66c7-0a92-456f-927a-fe596ffdedb3\" (UID: \"c45b66c7-0a92-456f-927a-fe596ffdedb3\") " Jan 
27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.737068 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50c0ecb4-7212-4c52-ba39-4fb298404899-utilities\") pod \"50c0ecb4-7212-4c52-ba39-4fb298404899\" (UID: \"50c0ecb4-7212-4c52-ba39-4fb298404899\") " Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.737093 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c45b66c7-0a92-456f-927a-fe596ffdedb3-catalog-content\") pod \"c45b66c7-0a92-456f-927a-fe596ffdedb3\" (UID: \"c45b66c7-0a92-456f-927a-fe596ffdedb3\") " Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.737112 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c45b66c7-0a92-456f-927a-fe596ffdedb3-utilities\") pod \"c45b66c7-0a92-456f-927a-fe596ffdedb3\" (UID: \"c45b66c7-0a92-456f-927a-fe596ffdedb3\") " Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.738863 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c45b66c7-0a92-456f-927a-fe596ffdedb3-utilities" (OuterVolumeSpecName: "utilities") pod "c45b66c7-0a92-456f-927a-fe596ffdedb3" (UID: "c45b66c7-0a92-456f-927a-fe596ffdedb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.739165 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e762e88-c00a-49b7-8a84-48c7fe50b602-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7e762e88-c00a-49b7-8a84-48c7fe50b602" (UID: "7e762e88-c00a-49b7-8a84-48c7fe50b602"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.739770 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-utilities" (OuterVolumeSpecName: "utilities") pod "6841cc70-80cd-499f-a8e6-e2a9031dcbf0" (UID: "6841cc70-80cd-499f-a8e6-e2a9031dcbf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.740319 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50c0ecb4-7212-4c52-ba39-4fb298404899-utilities" (OuterVolumeSpecName: "utilities") pod "50c0ecb4-7212-4c52-ba39-4fb298404899" (UID: "50c0ecb4-7212-4c52-ba39-4fb298404899"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.741232 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c45b66c7-0a92-456f-927a-fe596ffdedb3-kube-api-access-lbdsx" (OuterVolumeSpecName: "kube-api-access-lbdsx") pod "c45b66c7-0a92-456f-927a-fe596ffdedb3" (UID: "c45b66c7-0a92-456f-927a-fe596ffdedb3"). InnerVolumeSpecName "kube-api-access-lbdsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.744093 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e762e88-c00a-49b7-8a84-48c7fe50b602-kube-api-access-84px8" (OuterVolumeSpecName: "kube-api-access-84px8") pod "7e762e88-c00a-49b7-8a84-48c7fe50b602" (UID: "7e762e88-c00a-49b7-8a84-48c7fe50b602"). InnerVolumeSpecName "kube-api-access-84px8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.744166 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e762e88-c00a-49b7-8a84-48c7fe50b602-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7e762e88-c00a-49b7-8a84-48c7fe50b602" (UID: "7e762e88-c00a-49b7-8a84-48c7fe50b602"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.750856 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-kube-api-access-rpzw5" (OuterVolumeSpecName: "kube-api-access-rpzw5") pod "6841cc70-80cd-499f-a8e6-e2a9031dcbf0" (UID: "6841cc70-80cd-499f-a8e6-e2a9031dcbf0"). InnerVolumeSpecName "kube-api-access-rpzw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.753230 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c0ecb4-7212-4c52-ba39-4fb298404899-kube-api-access-pn5gg" (OuterVolumeSpecName: "kube-api-access-pn5gg") pod "50c0ecb4-7212-4c52-ba39-4fb298404899" (UID: "50c0ecb4-7212-4c52-ba39-4fb298404899"). InnerVolumeSpecName "kube-api-access-pn5gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.763191 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c45b66c7-0a92-456f-927a-fe596ffdedb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c45b66c7-0a92-456f-927a-fe596ffdedb3" (UID: "c45b66c7-0a92-456f-927a-fe596ffdedb3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.823329 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6841cc70-80cd-499f-a8e6-e2a9031dcbf0" (UID: "6841cc70-80cd-499f-a8e6-e2a9031dcbf0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.838314 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4792s\" (UniqueName: \"kubernetes.io/projected/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-kube-api-access-4792s\") pod \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\" (UID: \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\") " Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.838386 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-utilities\") pod \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\" (UID: \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\") " Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.838471 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-catalog-content\") pod \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\" (UID: \"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1\") " Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.838710 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50c0ecb4-7212-4c52-ba39-4fb298404899-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.838723 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbdsx\" (UniqueName: \"kubernetes.io/projected/c45b66c7-0a92-456f-927a-fe596ffdedb3-kube-api-access-lbdsx\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.838733 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c45b66c7-0a92-456f-927a-fe596ffdedb3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.838742 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c45b66c7-0a92-456f-927a-fe596ffdedb3-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.838770 5049 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7e762e88-c00a-49b7-8a84-48c7fe50b602-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.838783 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpzw5\" (UniqueName: \"kubernetes.io/projected/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-kube-api-access-rpzw5\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.838793 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.838801 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84px8\" (UniqueName: \"kubernetes.io/projected/7e762e88-c00a-49b7-8a84-48c7fe50b602-kube-api-access-84px8\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.838811 5049 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/7e762e88-c00a-49b7-8a84-48c7fe50b602-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.838819 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6841cc70-80cd-499f-a8e6-e2a9031dcbf0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.838827 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn5gg\" (UniqueName: \"kubernetes.io/projected/50c0ecb4-7212-4c52-ba39-4fb298404899-kube-api-access-pn5gg\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.840783 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-utilities" (OuterVolumeSpecName: "utilities") pod "bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" (UID: "bb9802f7-71e8-4b07-a308-7a0fe06aa4b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.843006 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-kube-api-access-4792s" (OuterVolumeSpecName: "kube-api-access-4792s") pod "bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" (UID: "bb9802f7-71e8-4b07-a308-7a0fe06aa4b1"). InnerVolumeSpecName "kube-api-access-4792s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.846431 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50c0ecb4-7212-4c52-ba39-4fb298404899-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50c0ecb4-7212-4c52-ba39-4fb298404899" (UID: "50c0ecb4-7212-4c52-ba39-4fb298404899"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.937824 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rdlx6"] Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.940477 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.940501 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50c0ecb4-7212-4c52-ba39-4fb298404899-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.940511 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4792s\" (UniqueName: \"kubernetes.io/projected/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-kube-api-access-4792s\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.951935 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" event={"ID":"4ded2900-908e-416c-9028-cfb5926b7ad5","Type":"ContainerStarted","Data":"441bef1fb58f0ed8798bc7ab106728de8cbdbb8990fd1ace26e947c0394b75cb"} Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.954037 5049 generic.go:334] "Generic (PLEG): container finished" podID="50c0ecb4-7212-4c52-ba39-4fb298404899" containerID="487962a73cc4113fa682ba4bfab96f7f53a42221fbf2632d755efe4667b4ad76" exitCode=0 Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.954093 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6g667" event={"ID":"50c0ecb4-7212-4c52-ba39-4fb298404899","Type":"ContainerDied","Data":"487962a73cc4113fa682ba4bfab96f7f53a42221fbf2632d755efe4667b4ad76"} Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.954113 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6g667" event={"ID":"50c0ecb4-7212-4c52-ba39-4fb298404899","Type":"ContainerDied","Data":"3bbce7d122156226a8ed5bf093fd5cb54039606b643f7045caaa53c4a4cab464"} Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.954133 5049 scope.go:117] "RemoveContainer" containerID="487962a73cc4113fa682ba4bfab96f7f53a42221fbf2632d755efe4667b4ad76" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.954254 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6g667" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.957590 5049 generic.go:334] "Generic (PLEG): container finished" podID="bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" containerID="3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde" exitCode=0 Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.957639 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58k4t" event={"ID":"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1","Type":"ContainerDied","Data":"3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde"} Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.957655 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58k4t" event={"ID":"bb9802f7-71e8-4b07-a308-7a0fe06aa4b1","Type":"ContainerDied","Data":"51c4563b5a817f68cf78a14714acf2b1c4f981c3965a2df4fdc05bc4dd3df2eb"} Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.957697 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58k4t" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.959525 5049 generic.go:334] "Generic (PLEG): container finished" podID="6841cc70-80cd-499f-a8e6-e2a9031dcbf0" containerID="16c92a25fb92ad0237156f4484509c4f6a635732698c598a7cc6609344ee9a9d" exitCode=0 Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.959607 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwn2r" event={"ID":"6841cc70-80cd-499f-a8e6-e2a9031dcbf0","Type":"ContainerDied","Data":"16c92a25fb92ad0237156f4484509c4f6a635732698c598a7cc6609344ee9a9d"} Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.959647 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bwn2r" event={"ID":"6841cc70-80cd-499f-a8e6-e2a9031dcbf0","Type":"ContainerDied","Data":"693d099465d796041e99179e9e80ae0b38e168c8caedd0e10121d492864387f1"} Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.959854 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bwn2r" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.964275 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.964350 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" event={"ID":"7e762e88-c00a-49b7-8a84-48c7fe50b602","Type":"ContainerDied","Data":"546c049391ea1e5ca260c7fe1a33952af9fc6ebcae8136a0d6dd7c863647a992"} Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.964444 5049 generic.go:334] "Generic (PLEG): container finished" podID="7e762e88-c00a-49b7-8a84-48c7fe50b602" containerID="546c049391ea1e5ca260c7fe1a33952af9fc6ebcae8136a0d6dd7c863647a992" exitCode=0 Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.964524 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q7vfz" event={"ID":"7e762e88-c00a-49b7-8a84-48c7fe50b602","Type":"ContainerDied","Data":"a9981277bfbec9aa20881abd6ef5a268987f673773727be3ea777e0cd7e5306b"} Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.970046 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" (UID: "bb9802f7-71e8-4b07-a308-7a0fe06aa4b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.988351 5049 scope.go:117] "RemoveContainer" containerID="11af5bfa8e2c64cb051a4c535f107b4eef92d836ca301b3a77a6a50ead83587f" Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.993809 5049 generic.go:334] "Generic (PLEG): container finished" podID="c45b66c7-0a92-456f-927a-fe596ffdedb3" containerID="b960d99204142b0cf2a2955b0503256d1dee0ac041ceef049f477a091614042a" exitCode=0 Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.993878 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjtk" event={"ID":"c45b66c7-0a92-456f-927a-fe596ffdedb3","Type":"ContainerDied","Data":"b960d99204142b0cf2a2955b0503256d1dee0ac041ceef049f477a091614042a"} Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.993909 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xdjtk" event={"ID":"c45b66c7-0a92-456f-927a-fe596ffdedb3","Type":"ContainerDied","Data":"e990fc4e48187a7f468facda08734c89c4315769ac171e5a29cdfc977e1dc89b"} Jan 27 17:03:38 crc kubenswrapper[5049]: I0127 17:03:38.994065 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xdjtk" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.012141 5049 scope.go:117] "RemoveContainer" containerID="dbf74617e714e5aa251a6f6c50bd959ee11a0abc517bfbf34e37f09c6269caf9" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.029478 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6g667"] Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.038779 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6g667"] Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.039460 5049 scope.go:117] "RemoveContainer" containerID="487962a73cc4113fa682ba4bfab96f7f53a42221fbf2632d755efe4667b4ad76" Jan 27 17:03:39 crc kubenswrapper[5049]: E0127 17:03:39.040398 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"487962a73cc4113fa682ba4bfab96f7f53a42221fbf2632d755efe4667b4ad76\": container with ID starting with 487962a73cc4113fa682ba4bfab96f7f53a42221fbf2632d755efe4667b4ad76 not found: ID does not exist" containerID="487962a73cc4113fa682ba4bfab96f7f53a42221fbf2632d755efe4667b4ad76" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.040474 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"487962a73cc4113fa682ba4bfab96f7f53a42221fbf2632d755efe4667b4ad76"} err="failed to get container status \"487962a73cc4113fa682ba4bfab96f7f53a42221fbf2632d755efe4667b4ad76\": rpc error: code = NotFound desc = could not find container \"487962a73cc4113fa682ba4bfab96f7f53a42221fbf2632d755efe4667b4ad76\": container with ID starting with 487962a73cc4113fa682ba4bfab96f7f53a42221fbf2632d755efe4667b4ad76 not found: ID does not exist" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.040509 5049 scope.go:117] "RemoveContainer" containerID="11af5bfa8e2c64cb051a4c535f107b4eef92d836ca301b3a77a6a50ead83587f" Jan 27 17:03:39 crc kubenswrapper[5049]: E0127 17:03:39.041001 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11af5bfa8e2c64cb051a4c535f107b4eef92d836ca301b3a77a6a50ead83587f\": container with ID starting with 11af5bfa8e2c64cb051a4c535f107b4eef92d836ca301b3a77a6a50ead83587f not found: ID does not exist" containerID="11af5bfa8e2c64cb051a4c535f107b4eef92d836ca301b3a77a6a50ead83587f" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.041045 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11af5bfa8e2c64cb051a4c535f107b4eef92d836ca301b3a77a6a50ead83587f"} err="failed to get container status \"11af5bfa8e2c64cb051a4c535f107b4eef92d836ca301b3a77a6a50ead83587f\": rpc error: code = NotFound desc = could not find container \"11af5bfa8e2c64cb051a4c535f107b4eef92d836ca301b3a77a6a50ead83587f\": container with ID starting with 11af5bfa8e2c64cb051a4c535f107b4eef92d836ca301b3a77a6a50ead83587f not found: ID does not exist" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.041075 5049 scope.go:117] "RemoveContainer" containerID="dbf74617e714e5aa251a6f6c50bd959ee11a0abc517bfbf34e37f09c6269caf9" Jan 27 17:03:39 crc kubenswrapper[5049]: E0127 17:03:39.041365 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbf74617e714e5aa251a6f6c50bd959ee11a0abc517bfbf34e37f09c6269caf9\": container with ID starting with 
dbf74617e714e5aa251a6f6c50bd959ee11a0abc517bfbf34e37f09c6269caf9 not found: ID does not exist" containerID="dbf74617e714e5aa251a6f6c50bd959ee11a0abc517bfbf34e37f09c6269caf9" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.041405 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbf74617e714e5aa251a6f6c50bd959ee11a0abc517bfbf34e37f09c6269caf9"} err="failed to get container status \"dbf74617e714e5aa251a6f6c50bd959ee11a0abc517bfbf34e37f09c6269caf9\": rpc error: code = NotFound desc = could not find container \"dbf74617e714e5aa251a6f6c50bd959ee11a0abc517bfbf34e37f09c6269caf9\": container with ID starting with dbf74617e714e5aa251a6f6c50bd959ee11a0abc517bfbf34e37f09c6269caf9 not found: ID does not exist" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.041428 5049 scope.go:117] "RemoveContainer" containerID="3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.042079 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.043388 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bwn2r"] Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.056519 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bwn2r"] Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.065105 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q7vfz"] Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.067012 5049 scope.go:117] "RemoveContainer" containerID="26bc991d2fa686238778ec1da54cd52181cd5fd695198c5d045fa3d2771b55d6" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.069418 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q7vfz"] Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.073776 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjtk"] Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.077404 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xdjtk"] Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.082391 5049 scope.go:117] "RemoveContainer" containerID="a0b4dd238421b25856c634a02202843c93b564a30735bf234ef36f7d008cd228" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.097491 5049 scope.go:117] "RemoveContainer" containerID="3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde" Jan 27 17:03:39 crc kubenswrapper[5049]: E0127 17:03:39.097950 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde\": container with ID starting with 3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde not found: ID does not exist" containerID="3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.097999 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde"} err="failed to get container status 
\"3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde\": rpc error: code = NotFound desc = could not find container \"3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde\": container with ID starting with 3b318e9fa2b1a0ffa89bf347dba6272b08d3eb732d813c549c4fbb669a6f2dde not found: ID does not exist" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.098033 5049 scope.go:117] "RemoveContainer" containerID="26bc991d2fa686238778ec1da54cd52181cd5fd695198c5d045fa3d2771b55d6" Jan 27 17:03:39 crc kubenswrapper[5049]: E0127 17:03:39.098417 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26bc991d2fa686238778ec1da54cd52181cd5fd695198c5d045fa3d2771b55d6\": container with ID starting with 26bc991d2fa686238778ec1da54cd52181cd5fd695198c5d045fa3d2771b55d6 not found: ID does not exist" containerID="26bc991d2fa686238778ec1da54cd52181cd5fd695198c5d045fa3d2771b55d6" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.098454 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26bc991d2fa686238778ec1da54cd52181cd5fd695198c5d045fa3d2771b55d6"} err="failed to get container status \"26bc991d2fa686238778ec1da54cd52181cd5fd695198c5d045fa3d2771b55d6\": rpc error: code = NotFound desc = could not find container \"26bc991d2fa686238778ec1da54cd52181cd5fd695198c5d045fa3d2771b55d6\": container with ID starting with 26bc991d2fa686238778ec1da54cd52181cd5fd695198c5d045fa3d2771b55d6 not found: ID does not exist" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.098484 5049 scope.go:117] "RemoveContainer" containerID="a0b4dd238421b25856c634a02202843c93b564a30735bf234ef36f7d008cd228" Jan 27 17:03:39 crc kubenswrapper[5049]: E0127 17:03:39.098831 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0b4dd238421b25856c634a02202843c93b564a30735bf234ef36f7d008cd228\": container with ID starting with a0b4dd238421b25856c634a02202843c93b564a30735bf234ef36f7d008cd228 not found: ID does not exist" containerID="a0b4dd238421b25856c634a02202843c93b564a30735bf234ef36f7d008cd228" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.098905 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0b4dd238421b25856c634a02202843c93b564a30735bf234ef36f7d008cd228"} err="failed to get container status \"a0b4dd238421b25856c634a02202843c93b564a30735bf234ef36f7d008cd228\": rpc error: code = NotFound desc = could not find container \"a0b4dd238421b25856c634a02202843c93b564a30735bf234ef36f7d008cd228\": container with ID starting with a0b4dd238421b25856c634a02202843c93b564a30735bf234ef36f7d008cd228 not found: ID does not exist" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.098965 5049 scope.go:117] "RemoveContainer" containerID="16c92a25fb92ad0237156f4484509c4f6a635732698c598a7cc6609344ee9a9d" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.114175 5049 scope.go:117] "RemoveContainer" containerID="699f2402c121c55d87cfab5bb9895503a05d438024f60e8cb698bdad19396813" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.130727 5049 scope.go:117] "RemoveContainer" containerID="b501daacffd09c942778bd341494ce1aa0f0c8b69f62b4d3a2c20f881518b6b9" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.150393 5049 scope.go:117] "RemoveContainer" containerID="16c92a25fb92ad0237156f4484509c4f6a635732698c598a7cc6609344ee9a9d" Jan 27 17:03:39 crc 
kubenswrapper[5049]: E0127 17:03:39.150980 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c92a25fb92ad0237156f4484509c4f6a635732698c598a7cc6609344ee9a9d\": container with ID starting with 16c92a25fb92ad0237156f4484509c4f6a635732698c598a7cc6609344ee9a9d not found: ID does not exist" containerID="16c92a25fb92ad0237156f4484509c4f6a635732698c598a7cc6609344ee9a9d" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.151017 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c92a25fb92ad0237156f4484509c4f6a635732698c598a7cc6609344ee9a9d"} err="failed to get container status \"16c92a25fb92ad0237156f4484509c4f6a635732698c598a7cc6609344ee9a9d\": rpc error: code = NotFound desc = could not find container \"16c92a25fb92ad0237156f4484509c4f6a635732698c598a7cc6609344ee9a9d\": container with ID starting with 16c92a25fb92ad0237156f4484509c4f6a635732698c598a7cc6609344ee9a9d not found: ID does not exist" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.151044 5049 scope.go:117] "RemoveContainer" containerID="699f2402c121c55d87cfab5bb9895503a05d438024f60e8cb698bdad19396813" Jan 27 17:03:39 crc kubenswrapper[5049]: E0127 17:03:39.151419 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"699f2402c121c55d87cfab5bb9895503a05d438024f60e8cb698bdad19396813\": container with ID starting with 699f2402c121c55d87cfab5bb9895503a05d438024f60e8cb698bdad19396813 not found: ID does not exist" containerID="699f2402c121c55d87cfab5bb9895503a05d438024f60e8cb698bdad19396813" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.151454 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"699f2402c121c55d87cfab5bb9895503a05d438024f60e8cb698bdad19396813"} err="failed to get container status \"699f2402c121c55d87cfab5bb9895503a05d438024f60e8cb698bdad19396813\": rpc error: code = NotFound desc = could not find container \"699f2402c121c55d87cfab5bb9895503a05d438024f60e8cb698bdad19396813\": container with ID starting with 699f2402c121c55d87cfab5bb9895503a05d438024f60e8cb698bdad19396813 not found: ID does not exist" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.151484 5049 scope.go:117] "RemoveContainer" containerID="b501daacffd09c942778bd341494ce1aa0f0c8b69f62b4d3a2c20f881518b6b9" Jan 27 17:03:39 crc kubenswrapper[5049]: E0127 17:03:39.151741 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b501daacffd09c942778bd341494ce1aa0f0c8b69f62b4d3a2c20f881518b6b9\": container with ID starting with b501daacffd09c942778bd341494ce1aa0f0c8b69f62b4d3a2c20f881518b6b9 not found: ID does not exist" containerID="b501daacffd09c942778bd341494ce1aa0f0c8b69f62b4d3a2c20f881518b6b9" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.151758 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b501daacffd09c942778bd341494ce1aa0f0c8b69f62b4d3a2c20f881518b6b9"} err="failed to get container status \"b501daacffd09c942778bd341494ce1aa0f0c8b69f62b4d3a2c20f881518b6b9\": rpc error: code = NotFound desc = could not find container \"b501daacffd09c942778bd341494ce1aa0f0c8b69f62b4d3a2c20f881518b6b9\": container with ID starting with b501daacffd09c942778bd341494ce1aa0f0c8b69f62b4d3a2c20f881518b6b9 not found: ID does not exist" Jan 27 17:03:39 crc kubenswrapper[5049]: 
I0127 17:03:39.151773 5049 scope.go:117] "RemoveContainer" containerID="546c049391ea1e5ca260c7fe1a33952af9fc6ebcae8136a0d6dd7c863647a992" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.162395 5049 scope.go:117] "RemoveContainer" containerID="ea11c626d2afb5b33c9cdfe25b3939d0efb232227a32ef63c4a304b26ec1ab2b" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.173843 5049 scope.go:117] "RemoveContainer" containerID="546c049391ea1e5ca260c7fe1a33952af9fc6ebcae8136a0d6dd7c863647a992" Jan 27 17:03:39 crc kubenswrapper[5049]: E0127 17:03:39.174350 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"546c049391ea1e5ca260c7fe1a33952af9fc6ebcae8136a0d6dd7c863647a992\": container with ID starting with 546c049391ea1e5ca260c7fe1a33952af9fc6ebcae8136a0d6dd7c863647a992 not found: ID does not exist" containerID="546c049391ea1e5ca260c7fe1a33952af9fc6ebcae8136a0d6dd7c863647a992" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.174380 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"546c049391ea1e5ca260c7fe1a33952af9fc6ebcae8136a0d6dd7c863647a992"} err="failed to get container status \"546c049391ea1e5ca260c7fe1a33952af9fc6ebcae8136a0d6dd7c863647a992\": rpc error: code = NotFound desc = could not find container \"546c049391ea1e5ca260c7fe1a33952af9fc6ebcae8136a0d6dd7c863647a992\": container with ID starting with 546c049391ea1e5ca260c7fe1a33952af9fc6ebcae8136a0d6dd7c863647a992 not found: ID does not exist" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.174402 5049 scope.go:117] "RemoveContainer" containerID="ea11c626d2afb5b33c9cdfe25b3939d0efb232227a32ef63c4a304b26ec1ab2b" Jan 27 17:03:39 crc kubenswrapper[5049]: E0127 17:03:39.174665 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea11c626d2afb5b33c9cdfe25b3939d0efb232227a32ef63c4a304b26ec1ab2b\": container with ID starting with ea11c626d2afb5b33c9cdfe25b3939d0efb232227a32ef63c4a304b26ec1ab2b not found: ID does not exist" containerID="ea11c626d2afb5b33c9cdfe25b3939d0efb232227a32ef63c4a304b26ec1ab2b" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.174712 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea11c626d2afb5b33c9cdfe25b3939d0efb232227a32ef63c4a304b26ec1ab2b"} err="failed to get container status \"ea11c626d2afb5b33c9cdfe25b3939d0efb232227a32ef63c4a304b26ec1ab2b\": rpc error: code = NotFound desc = could not find container \"ea11c626d2afb5b33c9cdfe25b3939d0efb232227a32ef63c4a304b26ec1ab2b\": container with ID starting with ea11c626d2afb5b33c9cdfe25b3939d0efb232227a32ef63c4a304b26ec1ab2b not found: ID does not exist" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.174731 5049 scope.go:117] "RemoveContainer" containerID="b960d99204142b0cf2a2955b0503256d1dee0ac041ceef049f477a091614042a" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.185502 5049 scope.go:117] "RemoveContainer" containerID="f4b375fcb0f20e0ce464fe84d1191542c3578de2daf283448e9e4f851ce5ea8e" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.196863 5049 scope.go:117] "RemoveContainer" containerID="d984fd5b6431eab903e14083a94721b8b17273c7acbf481a36b2f1a77edcfc22" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.210028 5049 scope.go:117] "RemoveContainer" containerID="b960d99204142b0cf2a2955b0503256d1dee0ac041ceef049f477a091614042a" Jan 27 17:03:39 crc kubenswrapper[5049]: E0127 
17:03:39.210383 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b960d99204142b0cf2a2955b0503256d1dee0ac041ceef049f477a091614042a\": container with ID starting with b960d99204142b0cf2a2955b0503256d1dee0ac041ceef049f477a091614042a not found: ID does not exist" containerID="b960d99204142b0cf2a2955b0503256d1dee0ac041ceef049f477a091614042a" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.210425 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b960d99204142b0cf2a2955b0503256d1dee0ac041ceef049f477a091614042a"} err="failed to get container status \"b960d99204142b0cf2a2955b0503256d1dee0ac041ceef049f477a091614042a\": rpc error: code = NotFound desc = could not find container \"b960d99204142b0cf2a2955b0503256d1dee0ac041ceef049f477a091614042a\": container with ID starting with b960d99204142b0cf2a2955b0503256d1dee0ac041ceef049f477a091614042a not found: ID does not exist" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.210453 5049 scope.go:117] "RemoveContainer" containerID="f4b375fcb0f20e0ce464fe84d1191542c3578de2daf283448e9e4f851ce5ea8e" Jan 27 17:03:39 crc kubenswrapper[5049]: E0127 17:03:39.210897 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4b375fcb0f20e0ce464fe84d1191542c3578de2daf283448e9e4f851ce5ea8e\": container with ID starting with f4b375fcb0f20e0ce464fe84d1191542c3578de2daf283448e9e4f851ce5ea8e not found: ID does not exist" containerID="f4b375fcb0f20e0ce464fe84d1191542c3578de2daf283448e9e4f851ce5ea8e" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.210927 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4b375fcb0f20e0ce464fe84d1191542c3578de2daf283448e9e4f851ce5ea8e"} err="failed to get container status \"f4b375fcb0f20e0ce464fe84d1191542c3578de2daf283448e9e4f851ce5ea8e\": rpc error: code = NotFound desc = could not find container \"f4b375fcb0f20e0ce464fe84d1191542c3578de2daf283448e9e4f851ce5ea8e\": container with ID starting with f4b375fcb0f20e0ce464fe84d1191542c3578de2daf283448e9e4f851ce5ea8e not found: ID does not exist" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.210949 5049 scope.go:117] "RemoveContainer" containerID="d984fd5b6431eab903e14083a94721b8b17273c7acbf481a36b2f1a77edcfc22" Jan 27 17:03:39 crc kubenswrapper[5049]: E0127 17:03:39.211171 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d984fd5b6431eab903e14083a94721b8b17273c7acbf481a36b2f1a77edcfc22\": container with ID starting with d984fd5b6431eab903e14083a94721b8b17273c7acbf481a36b2f1a77edcfc22 not found: ID does not exist" containerID="d984fd5b6431eab903e14083a94721b8b17273c7acbf481a36b2f1a77edcfc22" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.211197 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d984fd5b6431eab903e14083a94721b8b17273c7acbf481a36b2f1a77edcfc22"} err="failed to get container status \"d984fd5b6431eab903e14083a94721b8b17273c7acbf481a36b2f1a77edcfc22\": rpc error: code = NotFound desc = could not find container \"d984fd5b6431eab903e14083a94721b8b17273c7acbf481a36b2f1a77edcfc22\": container with ID starting with d984fd5b6431eab903e14083a94721b8b17273c7acbf481a36b2f1a77edcfc22 not found: ID does not exist"
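
The exchange above is kubelet's delete-then-verify pattern: scope.go issues RemoveContainer, and a follow-up ContainerStatus query for the same ID comes back NotFound because the removal has already succeeded, so the kubelet logs the error and moves on rather than retrying. A minimal sketch of that idempotent treatment, using only a stand-in runtime and a hypothetical errNotFound sentinel (not the kubelet's actual CRI client):

```go
// Hypothetical sketch (not kubelet source): treating NotFound as success
// when deleting a container that a previous pass already removed.
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the gRPC NotFound code the runtime returns.
var errNotFound = errors.New("container not found")

// runtime is a stand-in for the CRI runtime service.
type runtime struct{ containers map[string]bool }

func (r *runtime) remove(id string) error {
	if !r.containers[id] {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	delete(r.containers, id)
	return nil
}

// removeIdempotent mirrors the log's behaviour: NotFound from the
// runtime means the work is already done, so it is logged, not retried.
func removeIdempotent(r *runtime, id string) error {
	if err := r.remove(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("DeleteContainer returned error (already gone): %v\n", err)
			return nil
		}
		return err // a real failure propagates
	}
	return nil
}

func main() {
	r := &runtime{containers: map[string]bool{"546c04": true}}
	_ = removeIdempotent(r, "546c04") // first pass removes the container
	_ = removeIdempotent(r, "546c04") // second pass hits NotFound, still succeeds
}
```

Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.282707 5049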
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-58k4t"] Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.294359 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-58k4t"] Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.657199 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50c0ecb4-7212-4c52-ba39-4fb298404899" path="/var/lib/kubelet/pods/50c0ecb4-7212-4c52-ba39-4fb298404899/volumes" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.658354 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6841cc70-80cd-499f-a8e6-e2a9031dcbf0" path="/var/lib/kubelet/pods/6841cc70-80cd-499f-a8e6-e2a9031dcbf0/volumes" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.659356 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e762e88-c00a-49b7-8a84-48c7fe50b602" path="/var/lib/kubelet/pods/7e762e88-c00a-49b7-8a84-48c7fe50b602/volumes" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.660982 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" path="/var/lib/kubelet/pods/bb9802f7-71e8-4b07-a308-7a0fe06aa4b1/volumes" Jan 27 17:03:39 crc kubenswrapper[5049]: I0127 17:03:39.661756 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c45b66c7-0a92-456f-927a-fe596ffdedb3" path="/var/lib/kubelet/pods/c45b66c7-0a92-456f-927a-fe596ffdedb3/volumes" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.003114 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" event={"ID":"4ded2900-908e-416c-9028-cfb5926b7ad5","Type":"ContainerStarted","Data":"05d58b452b511f8f86c76eaf31a72b321d789b5537d84b84ffaaf46cbc21afbe"} Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.003462 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.005332 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.022508 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rdlx6" podStartSLOduration=2.022484732 podStartE2EDuration="2.022484732s" podCreationTimestamp="2026-01-27 17:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:03:40.016298188 +0000 UTC m=+395.115271747" watchObservedRunningTime="2026-01-27 17:03:40.022484732 +0000 UTC m=+395.121458301" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.472792 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kwv44"] Jan 27 17:03:40 crc kubenswrapper[5049]: E0127 17:03:40.472986 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" containerName="registry-server" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.472999 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" containerName="registry-server" Jan 27 17:03:40 crc kubenswrapper[5049]: E0127 17:03:40.473013 5049 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="50c0ecb4-7212-4c52-ba39-4fb298404899" containerName="registry-server" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473019 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c0ecb4-7212-4c52-ba39-4fb298404899" containerName="registry-server" Jan 27 17:03:40 crc kubenswrapper[5049]: E0127 17:03:40.473028 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c0ecb4-7212-4c52-ba39-4fb298404899" containerName="extract-content" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473033 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c0ecb4-7212-4c52-ba39-4fb298404899" containerName="extract-content" Jan 27 17:03:40 crc kubenswrapper[5049]: E0127 17:03:40.473041 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6841cc70-80cd-499f-a8e6-e2a9031dcbf0" containerName="extract-content" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473071 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="6841cc70-80cd-499f-a8e6-e2a9031dcbf0" containerName="extract-content" Jan 27 17:03:40 crc kubenswrapper[5049]: E0127 17:03:40.473079 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e762e88-c00a-49b7-8a84-48c7fe50b602" containerName="marketplace-operator" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473085 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e762e88-c00a-49b7-8a84-48c7fe50b602" containerName="marketplace-operator" Jan 27 17:03:40 crc kubenswrapper[5049]: E0127 17:03:40.473094 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45b66c7-0a92-456f-927a-fe596ffdedb3" containerName="extract-content" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473100 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45b66c7-0a92-456f-927a-fe596ffdedb3" containerName="extract-content" Jan 27 17:03:40 crc kubenswrapper[5049]: E0127 17:03:40.473105 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c0ecb4-7212-4c52-ba39-4fb298404899" containerName="extract-utilities" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473112 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c0ecb4-7212-4c52-ba39-4fb298404899" containerName="extract-utilities" Jan 27 17:03:40 crc kubenswrapper[5049]: E0127 17:03:40.473119 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6841cc70-80cd-499f-a8e6-e2a9031dcbf0" containerName="registry-server" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473125 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="6841cc70-80cd-499f-a8e6-e2a9031dcbf0" containerName="registry-server" Jan 27 17:03:40 crc kubenswrapper[5049]: E0127 17:03:40.473133 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45b66c7-0a92-456f-927a-fe596ffdedb3" containerName="extract-utilities" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473139 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45b66c7-0a92-456f-927a-fe596ffdedb3" containerName="extract-utilities" Jan 27 17:03:40 crc kubenswrapper[5049]: E0127 17:03:40.473148 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45b66c7-0a92-456f-927a-fe596ffdedb3" containerName="registry-server" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473154 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45b66c7-0a92-456f-927a-fe596ffdedb3" containerName="registry-server" Jan 27 17:03:40 crc kubenswrapper[5049]: E0127 17:03:40.473164 5049 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6841cc70-80cd-499f-a8e6-e2a9031dcbf0" containerName="extract-utilities" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473169 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="6841cc70-80cd-499f-a8e6-e2a9031dcbf0" containerName="extract-utilities" Jan 27 17:03:40 crc kubenswrapper[5049]: E0127 17:03:40.473177 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" containerName="extract-content" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473183 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" containerName="extract-content" Jan 27 17:03:40 crc kubenswrapper[5049]: E0127 17:03:40.473190 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" containerName="extract-utilities" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473195 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" containerName="extract-utilities" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473276 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c45b66c7-0a92-456f-927a-fe596ffdedb3" containerName="registry-server" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473290 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9802f7-71e8-4b07-a308-7a0fe06aa4b1" containerName="registry-server" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473297 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e762e88-c00a-49b7-8a84-48c7fe50b602" containerName="marketplace-operator" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473305 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c0ecb4-7212-4c52-ba39-4fb298404899" containerName="registry-server" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473312 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="6841cc70-80cd-499f-a8e6-e2a9031dcbf0" containerName="registry-server" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473320 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e762e88-c00a-49b7-8a84-48c7fe50b602" containerName="marketplace-operator" Jan 27 17:03:40 crc kubenswrapper[5049]: E0127 17:03:40.473401 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e762e88-c00a-49b7-8a84-48c7fe50b602" containerName="marketplace-operator" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.473408 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e762e88-c00a-49b7-8a84-48c7fe50b602" containerName="marketplace-operator" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.474027 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.476929 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.491002 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwv44"] Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.559649 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/466244fe-7514-4967-90d8-2c722168a89f-utilities\") pod \"certified-operators-kwv44\" (UID: \"466244fe-7514-4967-90d8-2c722168a89f\") " pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.559715 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsrr5\" (UniqueName: \"kubernetes.io/projected/466244fe-7514-4967-90d8-2c722168a89f-kube-api-access-hsrr5\") pod \"certified-operators-kwv44\" (UID: \"466244fe-7514-4967-90d8-2c722168a89f\") " pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.559748 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/466244fe-7514-4967-90d8-2c722168a89f-catalog-content\") pod \"certified-operators-kwv44\" (UID: \"466244fe-7514-4967-90d8-2c722168a89f\") " pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.660931 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/466244fe-7514-4967-90d8-2c722168a89f-utilities\") pod \"certified-operators-kwv44\" (UID: \"466244fe-7514-4967-90d8-2c722168a89f\") " pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.661029 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsrr5\" (UniqueName: \"kubernetes.io/projected/466244fe-7514-4967-90d8-2c722168a89f-kube-api-access-hsrr5\") pod \"certified-operators-kwv44\" (UID: \"466244fe-7514-4967-90d8-2c722168a89f\") " pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.661067 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/466244fe-7514-4967-90d8-2c722168a89f-catalog-content\") pod \"certified-operators-kwv44\" (UID: \"466244fe-7514-4967-90d8-2c722168a89f\") " pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.661547 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/466244fe-7514-4967-90d8-2c722168a89f-utilities\") pod \"certified-operators-kwv44\" (UID: \"466244fe-7514-4967-90d8-2c722168a89f\") " pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.661597 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/466244fe-7514-4967-90d8-2c722168a89f-catalog-content\") pod \"certified-operators-kwv44\" (UID: 
\"466244fe-7514-4967-90d8-2c722168a89f\") " pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.670752 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z4czz"] Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.672392 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.686043 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z4czz"] Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.697374 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsrr5\" (UniqueName: \"kubernetes.io/projected/466244fe-7514-4967-90d8-2c722168a89f-kube-api-access-hsrr5\") pod \"certified-operators-kwv44\" (UID: \"466244fe-7514-4967-90d8-2c722168a89f\") " pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.733871 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.763434 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63202912-fe76-4c4c-84c6-2f8073537a86-utilities\") pod \"redhat-marketplace-z4czz\" (UID: \"63202912-fe76-4c4c-84c6-2f8073537a86\") " pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.763506 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfkpk\" (UniqueName: \"kubernetes.io/projected/63202912-fe76-4c4c-84c6-2f8073537a86-kube-api-access-dfkpk\") pod \"redhat-marketplace-z4czz\" (UID: \"63202912-fe76-4c4c-84c6-2f8073537a86\") " pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.763529 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63202912-fe76-4c4c-84c6-2f8073537a86-catalog-content\") pod \"redhat-marketplace-z4czz\" (UID: \"63202912-fe76-4c4c-84c6-2f8073537a86\") " pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.791728 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.867290 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfkpk\" (UniqueName: \"kubernetes.io/projected/63202912-fe76-4c4c-84c6-2f8073537a86-kube-api-access-dfkpk\") pod \"redhat-marketplace-z4czz\" (UID: \"63202912-fe76-4c4c-84c6-2f8073537a86\") " pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.867366 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63202912-fe76-4c4c-84c6-2f8073537a86-catalog-content\") pod \"redhat-marketplace-z4czz\" (UID: \"63202912-fe76-4c4c-84c6-2f8073537a86\") " pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.867447 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63202912-fe76-4c4c-84c6-2f8073537a86-utilities\") pod \"redhat-marketplace-z4czz\" (UID: \"63202912-fe76-4c4c-84c6-2f8073537a86\") " pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.868034 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63202912-fe76-4c4c-84c6-2f8073537a86-utilities\") pod \"redhat-marketplace-z4czz\" (UID: \"63202912-fe76-4c4c-84c6-2f8073537a86\") " pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.868036 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63202912-fe76-4c4c-84c6-2f8073537a86-catalog-content\") pod \"redhat-marketplace-z4czz\" (UID: \"63202912-fe76-4c4c-84c6-2f8073537a86\") " pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:40 crc kubenswrapper[5049]: I0127 17:03:40.910698 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfkpk\" (UniqueName: \"kubernetes.io/projected/63202912-fe76-4c4c-84c6-2f8073537a86-kube-api-access-dfkpk\") pod \"redhat-marketplace-z4czz\" (UID: \"63202912-fe76-4c4c-84c6-2f8073537a86\") " pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:41 crc kubenswrapper[5049]: I0127 17:03:41.021403 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwv44"] Jan 27 17:03:41 crc kubenswrapper[5049]: W0127 17:03:41.024768 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod466244fe_7514_4967_90d8_2c722168a89f.slice/crio-dd3f4161a6c1ae73f4a39b0cc66146f162ef8d6037f1de448a9674f25296b764 WatchSource:0}: Error finding container dd3f4161a6c1ae73f4a39b0cc66146f162ef8d6037f1de448a9674f25296b764: Status 404 returned error can't find the container with id dd3f4161a6c1ae73f4a39b0cc66146f162ef8d6037f1de448a9674f25296b764 Jan 27 17:03:41 crc kubenswrapper[5049]: I0127 17:03:41.047937 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:41 crc kubenswrapper[5049]: I0127 17:03:41.446217 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z4czz"] Jan 27 17:03:41 crc kubenswrapper[5049]: W0127 17:03:41.454399 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63202912_fe76_4c4c_84c6_2f8073537a86.slice/crio-cd1231d0eb668fd7f184562879df5c2dc125f95a120b2cf49cdf5e66b2619805 WatchSource:0}: Error finding container cd1231d0eb668fd7f184562879df5c2dc125f95a120b2cf49cdf5e66b2619805: Status 404 returned error can't find the container with id cd1231d0eb668fd7f184562879df5c2dc125f95a120b2cf49cdf5e66b2619805 Jan 27 17:03:42 crc kubenswrapper[5049]: I0127 17:03:42.015732 5049 generic.go:334] "Generic (PLEG): container finished" podID="63202912-fe76-4c4c-84c6-2f8073537a86" containerID="5ef8f630c34f85672dcd8b283a54eac26f7121a373e30ac5ad887f493817aca7" exitCode=0 Jan 27 17:03:42 crc kubenswrapper[5049]: I0127 17:03:42.015822 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z4czz" event={"ID":"63202912-fe76-4c4c-84c6-2f8073537a86","Type":"ContainerDied","Data":"5ef8f630c34f85672dcd8b283a54eac26f7121a373e30ac5ad887f493817aca7"} Jan 27 17:03:42 crc kubenswrapper[5049]: I0127 17:03:42.016015 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z4czz" event={"ID":"63202912-fe76-4c4c-84c6-2f8073537a86","Type":"ContainerStarted","Data":"cd1231d0eb668fd7f184562879df5c2dc125f95a120b2cf49cdf5e66b2619805"} Jan 27 17:03:42 crc kubenswrapper[5049]: I0127 17:03:42.017372 5049 generic.go:334] "Generic (PLEG): container finished" podID="466244fe-7514-4967-90d8-2c722168a89f" containerID="a184380a163028cabb5dc748a35e1a020bf5a6153faed32e207973a43d6c992f" exitCode=0 Jan 27 17:03:42 crc kubenswrapper[5049]: I0127 17:03:42.017627 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwv44" event={"ID":"466244fe-7514-4967-90d8-2c722168a89f","Type":"ContainerDied","Data":"a184380a163028cabb5dc748a35e1a020bf5a6153faed32e207973a43d6c992f"} Jan 27 17:03:42 crc kubenswrapper[5049]: I0127 17:03:42.017703 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwv44" event={"ID":"466244fe-7514-4967-90d8-2c722168a89f","Type":"ContainerStarted","Data":"dd3f4161a6c1ae73f4a39b0cc66146f162ef8d6037f1de448a9674f25296b764"} Jan 27 17:03:42 crc kubenswrapper[5049]: I0127 17:03:42.880418 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r4psp"] Jan 27 17:03:42 crc kubenswrapper[5049]: I0127 17:03:42.883944 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:42 crc kubenswrapper[5049]: I0127 17:03:42.884287 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r4psp"] Jan 27 17:03:42 crc kubenswrapper[5049]: I0127 17:03:42.906114 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 17:03:42 crc kubenswrapper[5049]: I0127 17:03:42.995436 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcs6p\" (UniqueName: \"kubernetes.io/projected/58dd2ae5-2dc6-4602-a286-28c5809cc910-kube-api-access-hcs6p\") pod \"redhat-operators-r4psp\" (UID: \"58dd2ae5-2dc6-4602-a286-28c5809cc910\") " pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:42 crc kubenswrapper[5049]: I0127 17:03:42.995512 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58dd2ae5-2dc6-4602-a286-28c5809cc910-utilities\") pod \"redhat-operators-r4psp\" (UID: \"58dd2ae5-2dc6-4602-a286-28c5809cc910\") " pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:42 crc kubenswrapper[5049]: I0127 17:03:42.995600 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58dd2ae5-2dc6-4602-a286-28c5809cc910-catalog-content\") pod \"redhat-operators-r4psp\" (UID: \"58dd2ae5-2dc6-4602-a286-28c5809cc910\") " pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.068023 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pdwcs"] Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.069263 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.076647 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.080078 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pdwcs"] Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.098238 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cea0ebf5-d5b2-4215-923e-df9a49b83828-utilities\") pod \"community-operators-pdwcs\" (UID: \"cea0ebf5-d5b2-4215-923e-df9a49b83828\") " pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.098305 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmhlm\" (UniqueName: \"kubernetes.io/projected/cea0ebf5-d5b2-4215-923e-df9a49b83828-kube-api-access-qmhlm\") pod \"community-operators-pdwcs\" (UID: \"cea0ebf5-d5b2-4215-923e-df9a49b83828\") " pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.098348 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58dd2ae5-2dc6-4602-a286-28c5809cc910-catalog-content\") pod \"redhat-operators-r4psp\" (UID: \"58dd2ae5-2dc6-4602-a286-28c5809cc910\") " pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.098416 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcs6p\" (UniqueName: \"kubernetes.io/projected/58dd2ae5-2dc6-4602-a286-28c5809cc910-kube-api-access-hcs6p\") pod \"redhat-operators-r4psp\" (UID: \"58dd2ae5-2dc6-4602-a286-28c5809cc910\") " pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.098443 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cea0ebf5-d5b2-4215-923e-df9a49b83828-catalog-content\") pod \"community-operators-pdwcs\" (UID: \"cea0ebf5-d5b2-4215-923e-df9a49b83828\") " pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.098477 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58dd2ae5-2dc6-4602-a286-28c5809cc910-utilities\") pod \"redhat-operators-r4psp\" (UID: \"58dd2ae5-2dc6-4602-a286-28c5809cc910\") " pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.099282 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58dd2ae5-2dc6-4602-a286-28c5809cc910-utilities\") pod \"redhat-operators-r4psp\" (UID: \"58dd2ae5-2dc6-4602-a286-28c5809cc910\") " pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.100145 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58dd2ae5-2dc6-4602-a286-28c5809cc910-catalog-content\") pod \"redhat-operators-r4psp\" (UID: 
\"58dd2ae5-2dc6-4602-a286-28c5809cc910\") " pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.123257 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcs6p\" (UniqueName: \"kubernetes.io/projected/58dd2ae5-2dc6-4602-a286-28c5809cc910-kube-api-access-hcs6p\") pod \"redhat-operators-r4psp\" (UID: \"58dd2ae5-2dc6-4602-a286-28c5809cc910\") " pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.200398 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cea0ebf5-d5b2-4215-923e-df9a49b83828-catalog-content\") pod \"community-operators-pdwcs\" (UID: \"cea0ebf5-d5b2-4215-923e-df9a49b83828\") " pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.200928 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cea0ebf5-d5b2-4215-923e-df9a49b83828-utilities\") pod \"community-operators-pdwcs\" (UID: \"cea0ebf5-d5b2-4215-923e-df9a49b83828\") " pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.200961 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmhlm\" (UniqueName: \"kubernetes.io/projected/cea0ebf5-d5b2-4215-923e-df9a49b83828-kube-api-access-qmhlm\") pod \"community-operators-pdwcs\" (UID: \"cea0ebf5-d5b2-4215-923e-df9a49b83828\") " pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.202127 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cea0ebf5-d5b2-4215-923e-df9a49b83828-catalog-content\") pod \"community-operators-pdwcs\" (UID: \"cea0ebf5-d5b2-4215-923e-df9a49b83828\") " pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.202586 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cea0ebf5-d5b2-4215-923e-df9a49b83828-utilities\") pod \"community-operators-pdwcs\" (UID: \"cea0ebf5-d5b2-4215-923e-df9a49b83828\") " pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.216967 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmhlm\" (UniqueName: \"kubernetes.io/projected/cea0ebf5-d5b2-4215-923e-df9a49b83828-kube-api-access-qmhlm\") pod \"community-operators-pdwcs\" (UID: \"cea0ebf5-d5b2-4215-923e-df9a49b83828\") " pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.280725 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.446428 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.460739 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r4psp"] Jan 27 17:03:43 crc kubenswrapper[5049]: W0127 17:03:43.465019 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58dd2ae5_2dc6_4602_a286_28c5809cc910.slice/crio-75e1e31b3a42618cc10bfd9d65b77794cd4ede379ee4135898d6b5e1fb05f301 WatchSource:0}: Error finding container 75e1e31b3a42618cc10bfd9d65b77794cd4ede379ee4135898d6b5e1fb05f301: Status 404 returned error can't find the container with id 75e1e31b3a42618cc10bfd9d65b77794cd4ede379ee4135898d6b5e1fb05f301 Jan 27 17:03:43 crc kubenswrapper[5049]: I0127 17:03:43.624188 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pdwcs"] Jan 27 17:03:43 crc kubenswrapper[5049]: W0127 17:03:43.653219 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcea0ebf5_d5b2_4215_923e_df9a49b83828.slice/crio-88b54223fe0346a8d5564888a1c35ebf9f68affcc729803567e8eefdbca56efc WatchSource:0}: Error finding container 88b54223fe0346a8d5564888a1c35ebf9f68affcc729803567e8eefdbca56efc: Status 404 returned error can't find the container with id 88b54223fe0346a8d5564888a1c35ebf9f68affcc729803567e8eefdbca56efc Jan 27 17:03:44 crc kubenswrapper[5049]: I0127 17:03:44.031834 5049 generic.go:334] "Generic (PLEG): container finished" podID="58dd2ae5-2dc6-4602-a286-28c5809cc910" containerID="a1060f20b2f2c0a6bdc445e4bb91fe41edf33f749bf966ea209534b6941bdb25" exitCode=0 Jan 27 17:03:44 crc kubenswrapper[5049]: I0127 17:03:44.032041 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4psp" event={"ID":"58dd2ae5-2dc6-4602-a286-28c5809cc910","Type":"ContainerDied","Data":"a1060f20b2f2c0a6bdc445e4bb91fe41edf33f749bf966ea209534b6941bdb25"} Jan 27 17:03:44 crc kubenswrapper[5049]: I0127 17:03:44.032254 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4psp" event={"ID":"58dd2ae5-2dc6-4602-a286-28c5809cc910","Type":"ContainerStarted","Data":"75e1e31b3a42618cc10bfd9d65b77794cd4ede379ee4135898d6b5e1fb05f301"} Jan 27 17:03:44 crc kubenswrapper[5049]: I0127 17:03:44.036327 5049 generic.go:334] "Generic (PLEG): container finished" podID="466244fe-7514-4967-90d8-2c722168a89f" containerID="3b523b03ef7a84e90bd7f6d7520ed5d68144478067ab341c93c4cd85d9d8b7f9" exitCode=0 Jan 27 17:03:44 crc kubenswrapper[5049]: I0127 17:03:44.036413 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwv44" event={"ID":"466244fe-7514-4967-90d8-2c722168a89f","Type":"ContainerDied","Data":"3b523b03ef7a84e90bd7f6d7520ed5d68144478067ab341c93c4cd85d9d8b7f9"} Jan 27 17:03:44 crc kubenswrapper[5049]: I0127 17:03:44.038177 5049 generic.go:334] "Generic (PLEG): container finished" podID="63202912-fe76-4c4c-84c6-2f8073537a86" containerID="f3496829ebf2f1848b99a38b62d152fa238e3c5065647e939fbf67c7d3e734df" exitCode=0 Jan 27 17:03:44 crc kubenswrapper[5049]: I0127 17:03:44.038440 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z4czz" event={"ID":"63202912-fe76-4c4c-84c6-2f8073537a86","Type":"ContainerDied","Data":"f3496829ebf2f1848b99a38b62d152fa238e3c5065647e939fbf67c7d3e734df"} Jan 27 
17:03:44 crc kubenswrapper[5049]: I0127 17:03:44.043405 5049 generic.go:334] "Generic (PLEG): container finished" podID="cea0ebf5-d5b2-4215-923e-df9a49b83828" containerID="47d90ebc1e4ce724127fb9a7c0d8ece4d984b0f7421999027ae64f0085e55f15" exitCode=0 Jan 27 17:03:44 crc kubenswrapper[5049]: I0127 17:03:44.043465 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdwcs" event={"ID":"cea0ebf5-d5b2-4215-923e-df9a49b83828","Type":"ContainerDied","Data":"47d90ebc1e4ce724127fb9a7c0d8ece4d984b0f7421999027ae64f0085e55f15"} Jan 27 17:03:44 crc kubenswrapper[5049]: I0127 17:03:44.043517 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdwcs" event={"ID":"cea0ebf5-d5b2-4215-923e-df9a49b83828","Type":"ContainerStarted","Data":"88b54223fe0346a8d5564888a1c35ebf9f68affcc729803567e8eefdbca56efc"} Jan 27 17:03:45 crc kubenswrapper[5049]: I0127 17:03:45.051041 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwv44" event={"ID":"466244fe-7514-4967-90d8-2c722168a89f","Type":"ContainerStarted","Data":"7c058d6c0cde65c611f75d25a2d39e09dee3abc4f8fee044902675e413f3dd01"} Jan 27 17:03:45 crc kubenswrapper[5049]: I0127 17:03:45.053601 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z4czz" event={"ID":"63202912-fe76-4c4c-84c6-2f8073537a86","Type":"ContainerStarted","Data":"5c197f2f4bb2355fa0039a47210faa528efaf5fee80911f3888343f9b923af22"} Jan 27 17:03:45 crc kubenswrapper[5049]: I0127 17:03:45.058099 5049 generic.go:334] "Generic (PLEG): container finished" podID="cea0ebf5-d5b2-4215-923e-df9a49b83828" containerID="80f0fca586e5985910e0240029b9c78bc4f22c078d88c90a5cccf90ac16bbfd8" exitCode=0 Jan 27 17:03:45 crc kubenswrapper[5049]: I0127 17:03:45.058136 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdwcs" event={"ID":"cea0ebf5-d5b2-4215-923e-df9a49b83828","Type":"ContainerDied","Data":"80f0fca586e5985910e0240029b9c78bc4f22c078d88c90a5cccf90ac16bbfd8"} Jan 27 17:03:45 crc kubenswrapper[5049]: I0127 17:03:45.060303 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4psp" event={"ID":"58dd2ae5-2dc6-4602-a286-28c5809cc910","Type":"ContainerStarted","Data":"b5982f62a9a3a063641e537e7020c7c9b5a450557cb0cf808c2ad981bb83afd7"} Jan 27 17:03:45 crc kubenswrapper[5049]: I0127 17:03:45.067747 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kwv44" podStartSLOduration=2.596762448 podStartE2EDuration="5.067723307s" podCreationTimestamp="2026-01-27 17:03:40 +0000 UTC" firstStartedPulling="2026-01-27 17:03:42.019157357 +0000 UTC m=+397.118130946" lastFinishedPulling="2026-01-27 17:03:44.490118256 +0000 UTC m=+399.589091805" observedRunningTime="2026-01-27 17:03:45.066336958 +0000 UTC m=+400.165310527" watchObservedRunningTime="2026-01-27 17:03:45.067723307 +0000 UTC m=+400.166696866" Jan 27 17:03:45 crc kubenswrapper[5049]: I0127 17:03:45.129502 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z4czz" podStartSLOduration=2.696326731 podStartE2EDuration="5.129476125s" podCreationTimestamp="2026-01-27 17:03:40 +0000 UTC" firstStartedPulling="2026-01-27 17:03:42.019192888 +0000 UTC m=+397.118166477" lastFinishedPulling="2026-01-27 17:03:44.452342322 +0000 UTC m=+399.551315871" 
observedRunningTime="2026-01-27 17:03:45.128759875 +0000 UTC m=+400.227733454" watchObservedRunningTime="2026-01-27 17:03:45.129476125 +0000 UTC m=+400.228449684" Jan 27 17:03:46 crc kubenswrapper[5049]: I0127 17:03:46.071611 5049 generic.go:334] "Generic (PLEG): container finished" podID="58dd2ae5-2dc6-4602-a286-28c5809cc910" containerID="b5982f62a9a3a063641e537e7020c7c9b5a450557cb0cf808c2ad981bb83afd7" exitCode=0 Jan 27 17:03:46 crc kubenswrapper[5049]: I0127 17:03:46.072644 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4psp" event={"ID":"58dd2ae5-2dc6-4602-a286-28c5809cc910","Type":"ContainerDied","Data":"b5982f62a9a3a063641e537e7020c7c9b5a450557cb0cf808c2ad981bb83afd7"} Jan 27 17:03:46 crc kubenswrapper[5049]: I0127 17:03:46.076576 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdwcs" event={"ID":"cea0ebf5-d5b2-4215-923e-df9a49b83828","Type":"ContainerStarted","Data":"40ef394ecbc8b00620addf9c13181b4114a1b14ddc2ae742502c5fdcc20ac986"} Jan 27 17:03:46 crc kubenswrapper[5049]: I0127 17:03:46.114151 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pdwcs" podStartSLOduration=1.5498696079999998 podStartE2EDuration="3.114130868s" podCreationTimestamp="2026-01-27 17:03:43 +0000 UTC" firstStartedPulling="2026-01-27 17:03:44.046977479 +0000 UTC m=+399.145951068" lastFinishedPulling="2026-01-27 17:03:45.611238739 +0000 UTC m=+400.710212328" observedRunningTime="2026-01-27 17:03:46.111756011 +0000 UTC m=+401.210729550" watchObservedRunningTime="2026-01-27 17:03:46.114130868 +0000 UTC m=+401.213104417" Jan 27 17:03:47 crc kubenswrapper[5049]: I0127 17:03:47.083833 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4psp" event={"ID":"58dd2ae5-2dc6-4602-a286-28c5809cc910","Type":"ContainerStarted","Data":"cb6641bc1d9a34af89d0a1ebf9e91185fe8e9d142d0096150ccc989cc5dcda61"} Jan 27 17:03:47 crc kubenswrapper[5049]: I0127 17:03:47.107856 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r4psp" podStartSLOduration=2.637498655 podStartE2EDuration="5.107835165s" podCreationTimestamp="2026-01-27 17:03:42 +0000 UTC" firstStartedPulling="2026-01-27 17:03:44.033327545 +0000 UTC m=+399.132301094" lastFinishedPulling="2026-01-27 17:03:46.503664055 +0000 UTC m=+401.602637604" observedRunningTime="2026-01-27 17:03:47.105614053 +0000 UTC m=+402.204587612" watchObservedRunningTime="2026-01-27 17:03:47.107835165 +0000 UTC m=+402.206808724" Jan 27 17:03:47 crc kubenswrapper[5049]: I0127 17:03:47.781067 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:03:47 crc kubenswrapper[5049]: I0127 17:03:47.781313 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:03:50 crc kubenswrapper[5049]: I0127 17:03:50.792309 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:50 crc kubenswrapper[5049]: I0127 17:03:50.792953 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:50 crc kubenswrapper[5049]: I0127 17:03:50.846975 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:51 crc kubenswrapper[5049]: I0127 17:03:51.049085 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:51 crc kubenswrapper[5049]: I0127 17:03:51.049145 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:51 crc kubenswrapper[5049]: I0127 17:03:51.100287 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:51 crc kubenswrapper[5049]: I0127 17:03:51.351395 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z4czz" Jan 27 17:03:51 crc kubenswrapper[5049]: I0127 17:03:51.360912 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kwv44" Jan 27 17:03:53 crc kubenswrapper[5049]: I0127 17:03:53.281259 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:53 crc kubenswrapper[5049]: I0127 17:03:53.281574 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:53 crc kubenswrapper[5049]: I0127 17:03:53.325215 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:53 crc kubenswrapper[5049]: I0127 17:03:53.388359 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:03:53 crc kubenswrapper[5049]: I0127 17:03:53.447315 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:53 crc kubenswrapper[5049]: I0127 17:03:53.447373 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:53 crc kubenswrapper[5049]: I0127 17:03:53.489474 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:54 crc kubenswrapper[5049]: I0127 17:03:54.105431 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" podUID="96e75cde-66e8-4ab2-b715-3b07b34bc3a1" containerName="registry" containerID="cri-o://c028e7ef21355a533b551c7c94cc7e6dbf79efbe2cb2407698bea42f24902281" gracePeriod=30 Jan 27 17:03:54 crc kubenswrapper[5049]: I0127 17:03:54.351976 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pdwcs" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.315127 5049 generic.go:334] "Generic (PLEG): container finished" podID="96e75cde-66e8-4ab2-b715-3b07b34bc3a1" containerID="c028e7ef21355a533b551c7c94cc7e6dbf79efbe2cb2407698bea42f24902281" exitCode=0 Jan 27 17:03:55 crc 
kubenswrapper[5049]: I0127 17:03:55.315209 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" event={"ID":"96e75cde-66e8-4ab2-b715-3b07b34bc3a1","Type":"ContainerDied","Data":"c028e7ef21355a533b551c7c94cc7e6dbf79efbe2cb2407698bea42f24902281"} Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.638962 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.767820 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-registry-certificates\") pod \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.767946 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-registry-tls\") pod \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.767970 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-bound-sa-token\") pod \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.768082 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.768121 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-ca-trust-extracted\") pod \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.768137 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2fcf\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-kube-api-access-s2fcf\") pod \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.768173 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-installation-pull-secrets\") pod \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.768201 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-trusted-ca\") pod \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\" (UID: \"96e75cde-66e8-4ab2-b715-3b07b34bc3a1\") " Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.768927 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "96e75cde-66e8-4ab2-b715-3b07b34bc3a1" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.768994 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "96e75cde-66e8-4ab2-b715-3b07b34bc3a1" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.773725 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "96e75cde-66e8-4ab2-b715-3b07b34bc3a1" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.774399 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-kube-api-access-s2fcf" (OuterVolumeSpecName: "kube-api-access-s2fcf") pod "96e75cde-66e8-4ab2-b715-3b07b34bc3a1" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1"). InnerVolumeSpecName "kube-api-access-s2fcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.777963 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "96e75cde-66e8-4ab2-b715-3b07b34bc3a1" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.778845 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "96e75cde-66e8-4ab2-b715-3b07b34bc3a1" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.779406 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "96e75cde-66e8-4ab2-b715-3b07b34bc3a1" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.785225 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "96e75cde-66e8-4ab2-b715-3b07b34bc3a1" (UID: "96e75cde-66e8-4ab2-b715-3b07b34bc3a1"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.869368 5049 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.869414 5049 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.869427 5049 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.869439 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2fcf\" (UniqueName: \"kubernetes.io/projected/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-kube-api-access-s2fcf\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.869453 5049 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.869463 5049 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:55 crc kubenswrapper[5049]: I0127 17:03:55.869474 5049 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/96e75cde-66e8-4ab2-b715-3b07b34bc3a1-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 17:03:56 crc kubenswrapper[5049]: I0127 17:03:56.321854 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" event={"ID":"96e75cde-66e8-4ab2-b715-3b07b34bc3a1","Type":"ContainerDied","Data":"e26d4ff67846f7719631f05d03fb24fe14534f0d6bec3d1238c001e817fa4bde"} Jan 27 17:03:56 crc kubenswrapper[5049]: I0127 17:03:56.321910 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ttw4x" Jan 27 17:03:56 crc kubenswrapper[5049]: I0127 17:03:56.321913 5049 scope.go:117] "RemoveContainer" containerID="c028e7ef21355a533b551c7c94cc7e6dbf79efbe2cb2407698bea42f24902281" Jan 27 17:03:56 crc kubenswrapper[5049]: I0127 17:03:56.348005 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ttw4x"] Jan 27 17:03:56 crc kubenswrapper[5049]: I0127 17:03:56.357486 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ttw4x"] Jan 27 17:03:57 crc kubenswrapper[5049]: I0127 17:03:57.659628 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96e75cde-66e8-4ab2-b715-3b07b34bc3a1" path="/var/lib/kubelet/pods/96e75cde-66e8-4ab2-b715-3b07b34bc3a1/volumes" Jan 27 17:04:17 crc kubenswrapper[5049]: I0127 17:04:17.781964 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:04:17 crc kubenswrapper[5049]: I0127 17:04:17.782521 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:04:17 crc kubenswrapper[5049]: I0127 17:04:17.782632 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 17:04:17 crc kubenswrapper[5049]: I0127 17:04:17.783836 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea459288717ffc93888a4def2a377fe87f81dd7aad6264194cff040e79562fcf"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:04:17 crc kubenswrapper[5049]: I0127 17:04:17.783967 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://ea459288717ffc93888a4def2a377fe87f81dd7aad6264194cff040e79562fcf" gracePeriod=600 Jan 27 17:04:18 crc kubenswrapper[5049]: I0127 17:04:18.462667 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="ea459288717ffc93888a4def2a377fe87f81dd7aad6264194cff040e79562fcf" exitCode=0 Jan 27 17:04:18 crc kubenswrapper[5049]: I0127 17:04:18.462793 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"ea459288717ffc93888a4def2a377fe87f81dd7aad6264194cff040e79562fcf"} Jan 27 17:04:18 crc kubenswrapper[5049]: I0127 17:04:18.463542 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" 
event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"419d0791d576cddc4dba7f8228b001c562014997ef6e0484d641d20bb31d00ea"} Jan 27 17:04:18 crc kubenswrapper[5049]: I0127 17:04:18.463577 5049 scope.go:117] "RemoveContainer" containerID="e43dbe4ae8ff39cdc820ad8502bee1d94a3080b654db3acb0dfc134a2b89c701" Jan 27 17:06:47 crc kubenswrapper[5049]: I0127 17:06:47.782534 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:06:47 crc kubenswrapper[5049]: I0127 17:06:47.783578 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:07:17 crc kubenswrapper[5049]: I0127 17:07:17.782194 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:07:17 crc kubenswrapper[5049]: I0127 17:07:17.783001 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:07:47 crc kubenswrapper[5049]: I0127 17:07:47.782020 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:07:47 crc kubenswrapper[5049]: I0127 17:07:47.782574 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:07:47 crc kubenswrapper[5049]: I0127 17:07:47.782633 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 17:07:47 crc kubenswrapper[5049]: I0127 17:07:47.813433 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"419d0791d576cddc4dba7f8228b001c562014997ef6e0484d641d20bb31d00ea"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:07:47 crc kubenswrapper[5049]: I0127 17:07:47.813536 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" 
containerID="cri-o://419d0791d576cddc4dba7f8228b001c562014997ef6e0484d641d20bb31d00ea" gracePeriod=600 Jan 27 17:07:48 crc kubenswrapper[5049]: I0127 17:07:48.823753 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="419d0791d576cddc4dba7f8228b001c562014997ef6e0484d641d20bb31d00ea" exitCode=0 Jan 27 17:07:48 crc kubenswrapper[5049]: I0127 17:07:48.823848 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"419d0791d576cddc4dba7f8228b001c562014997ef6e0484d641d20bb31d00ea"} Jan 27 17:07:48 crc kubenswrapper[5049]: I0127 17:07:48.824513 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"d1aa9223cd763227032c3196c83813fd302f48bd7085cca520f2fac4b65a3aa4"} Jan 27 17:07:48 crc kubenswrapper[5049]: I0127 17:07:48.824544 5049 scope.go:117] "RemoveContainer" containerID="ea459288717ffc93888a4def2a377fe87f81dd7aad6264194cff040e79562fcf" Jan 27 17:09:48 crc kubenswrapper[5049]: I0127 17:09:48.258172 5049 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.513633 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zmzbf"] Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.514922 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovn-controller" containerID="cri-o://bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f" gracePeriod=30 Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.515020 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd" gracePeriod=30 Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.515022 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="nbdb" containerID="cri-o://3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9" gracePeriod=30 Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.515123 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovn-acl-logging" containerID="cri-o://4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030" gracePeriod=30 Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.515114 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="northd" containerID="cri-o://a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8" gracePeriod=30 Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.515189 5049 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="kube-rbac-proxy-node" containerID="cri-o://cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd" gracePeriod=30 Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.515206 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="sbdb" containerID="cri-o://cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50" gracePeriod=30 Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.563180 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" containerID="cri-o://bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac" gracePeriod=30 Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.687420 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hc4th_7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b/kube-multus/2.log" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.688125 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hc4th_7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b/kube-multus/1.log" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.688194 5049 generic.go:334] "Generic (PLEG): container finished" podID="7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b" containerID="9dbc006b8f9a5b749592de92d059f3931ab80241487ffca677bd8d2d860efbbb" exitCode=2 Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.688276 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hc4th" event={"ID":"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b","Type":"ContainerDied","Data":"9dbc006b8f9a5b749592de92d059f3931ab80241487ffca677bd8d2d860efbbb"} Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.688404 5049 scope.go:117] "RemoveContainer" containerID="836b443e3565d68c8d2b62b22874ce3ba84e9c4088924b18c8aafffd4ff804f0" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.689067 5049 scope.go:117] "RemoveContainer" containerID="9dbc006b8f9a5b749592de92d059f3931ab80241487ffca677bd8d2d860efbbb" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.690999 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/3.log" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.699301 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovn-acl-logging/0.log" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.713663 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovn-controller/0.log" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.714441 5049 generic.go:334] "Generic (PLEG): container finished" podID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerID="de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd" exitCode=0 Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.714470 5049 generic.go:334] "Generic (PLEG): container finished" podID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerID="cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd" exitCode=0 Jan 27 17:10:04 crc kubenswrapper[5049]: 
I0127 17:10:04.714478 5049 generic.go:334] "Generic (PLEG): container finished" podID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerID="4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030" exitCode=143 Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.714486 5049 generic.go:334] "Generic (PLEG): container finished" podID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerID="bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f" exitCode=143 Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.714490 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerDied","Data":"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd"} Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.714520 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerDied","Data":"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd"} Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.714529 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerDied","Data":"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030"} Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.714538 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerDied","Data":"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f"} Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.839825 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/3.log" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.844165 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovn-acl-logging/0.log" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.845073 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovn-controller/0.log" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.845834 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.922379 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j2m7s"] Jan 27 17:10:04 crc kubenswrapper[5049]: E0127 17:10:04.922926 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="kube-rbac-proxy-node" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.922951 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="kube-rbac-proxy-node" Jan 27 17:10:04 crc kubenswrapper[5049]: E0127 17:10:04.922973 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.922986 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: E0127 17:10:04.923002 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovn-acl-logging" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923016 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovn-acl-logging" Jan 27 17:10:04 crc kubenswrapper[5049]: E0127 17:10:04.923039 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovn-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923052 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovn-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: E0127 17:10:04.923072 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e75cde-66e8-4ab2-b715-3b07b34bc3a1" containerName="registry" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923129 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e75cde-66e8-4ab2-b715-3b07b34bc3a1" containerName="registry" Jan 27 17:10:04 crc kubenswrapper[5049]: E0127 17:10:04.923147 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923161 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: E0127 17:10:04.923177 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="sbdb" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923190 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="sbdb" Jan 27 17:10:04 crc kubenswrapper[5049]: E0127 17:10:04.923212 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="nbdb" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923224 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="nbdb" Jan 27 17:10:04 crc kubenswrapper[5049]: E0127 17:10:04.923246 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" 
containerName="kubecfg-setup" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923261 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="kubecfg-setup" Jan 27 17:10:04 crc kubenswrapper[5049]: E0127 17:10:04.923280 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923294 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 17:10:04 crc kubenswrapper[5049]: E0127 17:10:04.923319 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923339 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: E0127 17:10:04.923359 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="northd" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923442 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="northd" Jan 27 17:10:04 crc kubenswrapper[5049]: E0127 17:10:04.923466 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923483 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923670 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="kube-rbac-proxy-node" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923731 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="nbdb" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923752 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923769 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923786 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="northd" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923800 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovn-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923818 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923834 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="sbdb" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923851 5049 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="96e75cde-66e8-4ab2-b715-3b07b34bc3a1" containerName="registry" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923875 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923890 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.923906 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovn-acl-logging" Jan 27 17:10:04 crc kubenswrapper[5049]: E0127 17:10:04.924278 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.924295 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.924486 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerName="ovnkube-controller" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.927862 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.968633 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-systemd-units\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.968736 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-slash\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.968765 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-cni-bin\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.968811 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovn-node-metrics-cert\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.968844 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-run-ovn-kubernetes\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.968874 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-log-socket\") pod 
\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.968914 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-cni-netd\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.968950 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-ovn\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.968996 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-openvswitch\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969021 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-node-log\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969047 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-kubelet\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969079 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovnkube-script-lib\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969106 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-etc-openvswitch\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969155 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pflv\" (UniqueName: \"kubernetes.io/projected/b0ca704c-b740-43c4-845f-7de5bfa5a29c-kube-api-access-6pflv\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969188 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-var-lib-openvswitch\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969227 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-env-overrides\") pod 
\"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969264 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-systemd\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969298 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-run-netns\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969329 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovnkube-config\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969360 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\" (UID: \"b0ca704c-b740-43c4-845f-7de5bfa5a29c\") " Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969763 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969834 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969869 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-slash" (OuterVolumeSpecName: "host-slash") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.969903 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.971253 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.971314 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.971324 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.971401 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-log-socket" (OuterVolumeSpecName: "log-socket") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.971480 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-node-log" (OuterVolumeSpecName: "node-log") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.971481 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.971523 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.971549 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.971564 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.971594 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.971827 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.971883 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.972120 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.977743 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0ca704c-b740-43c4-845f-7de5bfa5a29c-kube-api-access-6pflv" (OuterVolumeSpecName: "kube-api-access-6pflv") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "kube-api-access-6pflv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.978016 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:10:04 crc kubenswrapper[5049]: I0127 17:10:04.986961 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b0ca704c-b740-43c4-845f-7de5bfa5a29c" (UID: "b0ca704c-b740-43c4-845f-7de5bfa5a29c"). 
InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.070537 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-etc-openvswitch\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.070597 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-systemd-units\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.070637 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-run-ovn\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.070702 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.070810 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-run-systemd\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.070874 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-run-ovn-kubernetes\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.070920 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-node-log\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.070949 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-ovnkube-config\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071006 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-slash\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071047 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-run-netns\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071088 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-cni-bin\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071161 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-var-lib-openvswitch\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071235 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-run-openvswitch\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071280 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-ovn-node-metrics-cert\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071406 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-log-socket\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071461 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc6cq\" (UniqueName: \"kubernetes.io/projected/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-kube-api-access-mc6cq\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071503 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-env-overrides\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071533 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-kubelet\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071629 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-ovnkube-script-lib\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071712 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-cni-netd\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071932 5049 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071964 5049 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071982 5049 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.071998 5049 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072016 5049 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072033 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pflv\" (UniqueName: \"kubernetes.io/projected/b0ca704c-b740-43c4-845f-7de5bfa5a29c-kube-api-access-6pflv\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072053 5049 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072069 5049 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072087 5049 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 
17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072107 5049 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072124 5049 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072148 5049 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072174 5049 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072200 5049 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072217 5049 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072234 5049 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b0ca704c-b740-43c4-845f-7de5bfa5a29c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072274 5049 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072293 5049 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072310 5049 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.072328 5049 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0ca704c-b740-43c4-845f-7de5bfa5a29c-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173459 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-env-overrides\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173499 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-kubelet\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173528 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-ovnkube-script-lib\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173546 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-cni-netd\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173561 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-etc-openvswitch\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173575 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-systemd-units\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173594 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-run-ovn\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173613 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173632 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-run-systemd\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173651 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-run-ovn-kubernetes\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173645 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-cni-netd\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173706 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-kubelet\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173727 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-etc-openvswitch\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173671 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-ovnkube-config\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173762 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-systemd-units\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173785 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-run-ovn\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173781 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-node-log\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173813 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-slash\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173813 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-node-log\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173829 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-run-netns\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173848 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-cni-bin\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173858 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-run-systemd\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173865 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-var-lib-openvswitch\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173897 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-run-ovn-kubernetes\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173918 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-slash\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173940 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-run-netns\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173963 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-run-openvswitch\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173881 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-var-lib-openvswitch\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173995 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc 
kubenswrapper[5049]: I0127 17:10:05.174019 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-run-openvswitch\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.174017 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-ovn-node-metrics-cert\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.174076 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-log-socket\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.174109 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc6cq\" (UniqueName: \"kubernetes.io/projected/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-kube-api-access-mc6cq\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.174394 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-ovnkube-config\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.174403 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-ovnkube-script-lib\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.173971 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-host-cni-bin\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.174446 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-log-socket\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.174510 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-env-overrides\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.182108 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-ovn-node-metrics-cert\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.189162 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc6cq\" (UniqueName: \"kubernetes.io/projected/2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9-kube-api-access-mc6cq\") pod \"ovnkube-node-j2m7s\" (UID: \"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9\") " pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.245443 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" Jan 27 17:10:05 crc kubenswrapper[5049]: W0127 17:10:05.277574 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f34a0ef_e0c4_4d4e_b827_b5955eb1d4e9.slice/crio-f606cac2cbb98a68bde11125d055118a86a4ba2ab1cb9fbee944148cba8a859d WatchSource:0}: Error finding container f606cac2cbb98a68bde11125d055118a86a4ba2ab1cb9fbee944148cba8a859d: Status 404 returned error can't find the container with id f606cac2cbb98a68bde11125d055118a86a4ba2ab1cb9fbee944148cba8a859d Jan 27 17:10:05 crc kubenswrapper[5049]: E0127 17:10:05.482254 5049 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f34a0ef_e0c4_4d4e_b827_b5955eb1d4e9.slice/crio-conmon-28ecf4e8134e97901004d52e5c3dddc124dfcee87875525948ea4de282cf3791.scope\": RecentStats: unable to find data in memory cache]" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.723025 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hc4th_7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b/kube-multus/2.log" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.723147 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hc4th" event={"ID":"7e5bfbd4-6a4f-4639-92d0-bdc25dd1611b","Type":"ContainerStarted","Data":"ef634d257d6f6142e3a915509dfd3e1a780de0c76888dcc6b2471a1d99d0c3f4"} Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.724634 5049 generic.go:334] "Generic (PLEG): container finished" podID="2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9" containerID="28ecf4e8134e97901004d52e5c3dddc124dfcee87875525948ea4de282cf3791" exitCode=0 Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.724739 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" event={"ID":"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9","Type":"ContainerDied","Data":"28ecf4e8134e97901004d52e5c3dddc124dfcee87875525948ea4de282cf3791"} Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.725115 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" event={"ID":"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9","Type":"ContainerStarted","Data":"f606cac2cbb98a68bde11125d055118a86a4ba2ab1cb9fbee944148cba8a859d"} Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.729017 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovnkube-controller/3.log" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.731292 5049 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovn-acl-logging/0.log" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.731702 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zmzbf_b0ca704c-b740-43c4-845f-7de5bfa5a29c/ovn-controller/0.log" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.732023 5049 generic.go:334] "Generic (PLEG): container finished" podID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerID="bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac" exitCode=0 Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.732042 5049 generic.go:334] "Generic (PLEG): container finished" podID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerID="cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50" exitCode=0 Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.732051 5049 generic.go:334] "Generic (PLEG): container finished" podID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerID="3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9" exitCode=0 Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.732057 5049 generic.go:334] "Generic (PLEG): container finished" podID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" containerID="a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8" exitCode=0 Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.732061 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerDied","Data":"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac"} Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.732096 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerDied","Data":"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50"} Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.732132 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerDied","Data":"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9"} Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.732147 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerDied","Data":"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8"} Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.732162 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" event={"ID":"b0ca704c-b740-43c4-845f-7de5bfa5a29c","Type":"ContainerDied","Data":"69c7d0a29280dc2dee96bf8941c6fc98faccbbb24726626c9dccb8754d022c06"} Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.732169 5049 scope.go:117] "RemoveContainer" containerID="bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.733247 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zmzbf" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.753813 5049 scope.go:117] "RemoveContainer" containerID="ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.780018 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zmzbf"] Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.783155 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zmzbf"] Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.803917 5049 scope.go:117] "RemoveContainer" containerID="cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.833698 5049 scope.go:117] "RemoveContainer" containerID="3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.849765 5049 scope.go:117] "RemoveContainer" containerID="a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.864735 5049 scope.go:117] "RemoveContainer" containerID="de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.882491 5049 scope.go:117] "RemoveContainer" containerID="cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.910272 5049 scope.go:117] "RemoveContainer" containerID="4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.939355 5049 scope.go:117] "RemoveContainer" containerID="bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.945080 5049 scope.go:117] "RemoveContainer" containerID="bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.974968 5049 scope.go:117] "RemoveContainer" containerID="ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0" Jan 27 17:10:05 crc kubenswrapper[5049]: E0127 17:10:05.975235 5049 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_ovn-controller_ovnkube-node-zmzbf_openshift-ovn-kubernetes_b0ca704c-b740-43c4-845f-7de5bfa5a29c_0 in pod sandbox 69c7d0a29280dc2dee96bf8941c6fc98faccbbb24726626c9dccb8754d022c06 from index: no such id: 'bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f'" containerID="bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f" Jan 27 17:10:05 crc kubenswrapper[5049]: E0127 17:10:05.975274 5049 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container k8s_ovn-controller_ovnkube-node-zmzbf_openshift-ovn-kubernetes_b0ca704c-b740-43c4-845f-7de5bfa5a29c_0 in pod sandbox 69c7d0a29280dc2dee96bf8941c6fc98faccbbb24726626c9dccb8754d022c06 from index: no such id: 'bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f'" containerID="bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.975295 5049 scope.go:117] "RemoveContainer" containerID="ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0" Jan 27 17:10:05 crc kubenswrapper[5049]: E0127 17:10:05.992222 5049 log.go:32] 
"RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kubecfg-setup_ovnkube-node-zmzbf_openshift-ovn-kubernetes_b0ca704c-b740-43c4-845f-7de5bfa5a29c_0 in pod sandbox 69c7d0a29280dc2dee96bf8941c6fc98faccbbb24726626c9dccb8754d022c06: identifier is not a container" containerID="ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.992265 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0"} err="rpc error: code = Unknown desc = failed to delete container k8s_kubecfg-setup_ovnkube-node-zmzbf_openshift-ovn-kubernetes_b0ca704c-b740-43c4-845f-7de5bfa5a29c_0 in pod sandbox 69c7d0a29280dc2dee96bf8941c6fc98faccbbb24726626c9dccb8754d022c06: identifier is not a container" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.992292 5049 scope.go:117] "RemoveContainer" containerID="bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac" Jan 27 17:10:05 crc kubenswrapper[5049]: E0127 17:10:05.993349 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac\": container with ID starting with bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac not found: ID does not exist" containerID="bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.993376 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac"} err="failed to get container status \"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac\": rpc error: code = NotFound desc = could not find container \"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac\": container with ID starting with bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac not found: ID does not exist" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.993391 5049 scope.go:117] "RemoveContainer" containerID="ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909" Jan 27 17:10:05 crc kubenswrapper[5049]: E0127 17:10:05.993660 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909\": container with ID starting with ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909 not found: ID does not exist" containerID="ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.993696 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909"} err="failed to get container status \"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909\": rpc error: code = NotFound desc = could not find container \"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909\": container with ID starting with ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909 not found: ID does not exist" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.993708 5049 scope.go:117] "RemoveContainer" 
containerID="cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50" Jan 27 17:10:05 crc kubenswrapper[5049]: E0127 17:10:05.993957 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\": container with ID starting with cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50 not found: ID does not exist" containerID="cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.994002 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50"} err="failed to get container status \"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\": rpc error: code = NotFound desc = could not find container \"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\": container with ID starting with cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50 not found: ID does not exist" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.994016 5049 scope.go:117] "RemoveContainer" containerID="3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9" Jan 27 17:10:05 crc kubenswrapper[5049]: E0127 17:10:05.994219 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\": container with ID starting with 3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9 not found: ID does not exist" containerID="3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.994239 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9"} err="failed to get container status \"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\": rpc error: code = NotFound desc = could not find container \"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\": container with ID starting with 3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9 not found: ID does not exist" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.994253 5049 scope.go:117] "RemoveContainer" containerID="a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8" Jan 27 17:10:05 crc kubenswrapper[5049]: E0127 17:10:05.994682 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\": container with ID starting with a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8 not found: ID does not exist" containerID="a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.994708 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8"} err="failed to get container status \"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\": rpc error: code = NotFound desc = could not find container \"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\": container with ID starting with 
a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8 not found: ID does not exist" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.994721 5049 scope.go:117] "RemoveContainer" containerID="de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd" Jan 27 17:10:05 crc kubenswrapper[5049]: E0127 17:10:05.995214 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\": container with ID starting with de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd not found: ID does not exist" containerID="de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.995271 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd"} err="failed to get container status \"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\": rpc error: code = NotFound desc = could not find container \"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\": container with ID starting with de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd not found: ID does not exist" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.995314 5049 scope.go:117] "RemoveContainer" containerID="cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd" Jan 27 17:10:05 crc kubenswrapper[5049]: E0127 17:10:05.996083 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\": container with ID starting with cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd not found: ID does not exist" containerID="cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.996105 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd"} err="failed to get container status \"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\": rpc error: code = NotFound desc = could not find container \"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\": container with ID starting with cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd not found: ID does not exist" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.996117 5049 scope.go:117] "RemoveContainer" containerID="4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030" Jan 27 17:10:05 crc kubenswrapper[5049]: E0127 17:10:05.996771 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\": container with ID starting with 4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030 not found: ID does not exist" containerID="4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.996799 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030"} err="failed to get container status \"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\": rpc 
error: code = NotFound desc = could not find container \"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\": container with ID starting with 4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030 not found: ID does not exist" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.996817 5049 scope.go:117] "RemoveContainer" containerID="bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f" Jan 27 17:10:05 crc kubenswrapper[5049]: E0127 17:10:05.997771 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\": container with ID starting with bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f not found: ID does not exist" containerID="bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.997794 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f"} err="failed to get container status \"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\": rpc error: code = NotFound desc = could not find container \"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\": container with ID starting with bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f not found: ID does not exist" Jan 27 17:10:05 crc kubenswrapper[5049]: I0127 17:10:05.997807 5049 scope.go:117] "RemoveContainer" containerID="ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0" Jan 27 17:10:06 crc kubenswrapper[5049]: E0127 17:10:06.000018 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\": container with ID starting with ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0 not found: ID does not exist" containerID="ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.000041 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0"} err="failed to get container status \"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\": rpc error: code = NotFound desc = could not find container \"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\": container with ID starting with ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.000054 5049 scope.go:117] "RemoveContainer" containerID="bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.000368 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac"} err="failed to get container status \"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac\": rpc error: code = NotFound desc = could not find container \"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac\": container with ID starting with bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 
17:10:06.000388 5049 scope.go:117] "RemoveContainer" containerID="ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.000632 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909"} err="failed to get container status \"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909\": rpc error: code = NotFound desc = could not find container \"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909\": container with ID starting with ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.000953 5049 scope.go:117] "RemoveContainer" containerID="cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.001392 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50"} err="failed to get container status \"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\": rpc error: code = NotFound desc = could not find container \"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\": container with ID starting with cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.001412 5049 scope.go:117] "RemoveContainer" containerID="3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.001629 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9"} err="failed to get container status \"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\": rpc error: code = NotFound desc = could not find container \"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\": container with ID starting with 3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.001649 5049 scope.go:117] "RemoveContainer" containerID="a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.001985 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8"} err="failed to get container status \"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\": rpc error: code = NotFound desc = could not find container \"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\": container with ID starting with a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.002004 5049 scope.go:117] "RemoveContainer" containerID="de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.002261 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd"} err="failed to get container status 
\"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\": rpc error: code = NotFound desc = could not find container \"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\": container with ID starting with de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.002282 5049 scope.go:117] "RemoveContainer" containerID="cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.002555 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd"} err="failed to get container status \"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\": rpc error: code = NotFound desc = could not find container \"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\": container with ID starting with cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.002583 5049 scope.go:117] "RemoveContainer" containerID="4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.002851 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030"} err="failed to get container status \"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\": rpc error: code = NotFound desc = could not find container \"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\": container with ID starting with 4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.002870 5049 scope.go:117] "RemoveContainer" containerID="bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.003102 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f"} err="failed to get container status \"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\": rpc error: code = NotFound desc = could not find container \"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\": container with ID starting with bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.003130 5049 scope.go:117] "RemoveContainer" containerID="ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.003351 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0"} err="failed to get container status \"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\": rpc error: code = NotFound desc = could not find container \"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\": container with ID starting with ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.003382 5049 scope.go:117] "RemoveContainer" 
containerID="bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.003770 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac"} err="failed to get container status \"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac\": rpc error: code = NotFound desc = could not find container \"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac\": container with ID starting with bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.003789 5049 scope.go:117] "RemoveContainer" containerID="ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.004064 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909"} err="failed to get container status \"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909\": rpc error: code = NotFound desc = could not find container \"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909\": container with ID starting with ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.004093 5049 scope.go:117] "RemoveContainer" containerID="cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.004300 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50"} err="failed to get container status \"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\": rpc error: code = NotFound desc = could not find container \"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\": container with ID starting with cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.004430 5049 scope.go:117] "RemoveContainer" containerID="3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.004866 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9"} err="failed to get container status \"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\": rpc error: code = NotFound desc = could not find container \"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\": container with ID starting with 3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.004888 5049 scope.go:117] "RemoveContainer" containerID="a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.005132 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8"} err="failed to get container status \"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\": rpc error: code = NotFound desc = could not find 
container \"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\": container with ID starting with a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.005150 5049 scope.go:117] "RemoveContainer" containerID="de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.005511 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd"} err="failed to get container status \"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\": rpc error: code = NotFound desc = could not find container \"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\": container with ID starting with de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.005530 5049 scope.go:117] "RemoveContainer" containerID="cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.005864 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd"} err="failed to get container status \"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\": rpc error: code = NotFound desc = could not find container \"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\": container with ID starting with cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.005884 5049 scope.go:117] "RemoveContainer" containerID="4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.006128 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030"} err="failed to get container status \"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\": rpc error: code = NotFound desc = could not find container \"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\": container with ID starting with 4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.006146 5049 scope.go:117] "RemoveContainer" containerID="bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.006457 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f"} err="failed to get container status \"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\": rpc error: code = NotFound desc = could not find container \"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\": container with ID starting with bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.006474 5049 scope.go:117] "RemoveContainer" containerID="ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.006761 5049 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0"} err="failed to get container status \"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\": rpc error: code = NotFound desc = could not find container \"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\": container with ID starting with ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.006780 5049 scope.go:117] "RemoveContainer" containerID="bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.007832 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac"} err="failed to get container status \"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac\": rpc error: code = NotFound desc = could not find container \"bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac\": container with ID starting with bf43c14751e8051ae9b5d762be14c2d65b6fc52e6ae5b66d9720070b0dc0a2ac not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.007852 5049 scope.go:117] "RemoveContainer" containerID="ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.008127 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909"} err="failed to get container status \"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909\": rpc error: code = NotFound desc = could not find container \"ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909\": container with ID starting with ab5fb8cd6b1dd7741ff0aeb58417259d78a4645ecbc2ef52eb9d828504e23909 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.008169 5049 scope.go:117] "RemoveContainer" containerID="cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.008420 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50"} err="failed to get container status \"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\": rpc error: code = NotFound desc = could not find container \"cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50\": container with ID starting with cb482edd6eec8cf295467200d15b38d2f384ce6172f6d35dad93e383dcda6b50 not found: ID does not exist" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.008445 5049 scope.go:117] "RemoveContainer" containerID="3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9" Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.008818 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9"} err="failed to get container status \"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\": rpc error: code = NotFound desc = could not find container \"3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9\": container with ID starting with 
3d5c39a394c659c2675346fce03579541906cc2c6d21665125d0e0db677cf1e9 not found: ID does not exist"
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.008844 5049 scope.go:117] "RemoveContainer" containerID="a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8"
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.009131 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8"} err="failed to get container status \"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\": rpc error: code = NotFound desc = could not find container \"a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8\": container with ID starting with a6a12eb71efe201cd3dbfed6b6d7bcdcfe9762c46a60bcb942e927d0e1d9e6f8 not found: ID does not exist"
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.009151 5049 scope.go:117] "RemoveContainer" containerID="de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd"
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.009451 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd"} err="failed to get container status \"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\": rpc error: code = NotFound desc = could not find container \"de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd\": container with ID starting with de98c3845757c70e7ba38e7c7cc77aca95d339329d209829cc5b21fbb6af17fd not found: ID does not exist"
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.009475 5049 scope.go:117] "RemoveContainer" containerID="cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd"
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.009650 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd"} err="failed to get container status \"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\": rpc error: code = NotFound desc = could not find container \"cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd\": container with ID starting with cf43612f9065ab03f2eb7578701e56ad9514af098c404f1f86848713bb0ed6fd not found: ID does not exist"
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.009847 5049 scope.go:117] "RemoveContainer" containerID="4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030"
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.010233 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030"} err="failed to get container status \"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\": rpc error: code = NotFound desc = could not find container \"4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030\": container with ID starting with 4e2079cf5c5db4dff78cd351e02f03274580d99487683e6e79b7f9fc8ac81030 not found: ID does not exist"
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.010264 5049 scope.go:117] "RemoveContainer" containerID="bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f"
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.010783 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f"} err="failed to get container status \"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\": rpc error: code = NotFound desc = could not find container \"bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f\": container with ID starting with bcb44fcbea64d4e588760d58011245b51085f6c37cb3bc7233ba35816701f50f not found: ID does not exist"
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.010809 5049 scope.go:117] "RemoveContainer" containerID="ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0"
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.011216 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0"} err="failed to get container status \"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\": rpc error: code = NotFound desc = could not find container \"ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0\": container with ID starting with ef00f282f4e551d816d41b1c78342380cd185085704356efd64540ddef830db0 not found: ID does not exist"
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.742007 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" event={"ID":"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9","Type":"ContainerStarted","Data":"f5dbbe464506ee04db0ec19c6a4fd84d88d567c86512e78b53301944b037544a"}
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.742057 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" event={"ID":"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9","Type":"ContainerStarted","Data":"9b4e2a626b8b9cf27a7fd9b66eb2622b600f1c2305fa4f19e89ff79e3b16d196"}
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.742074 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" event={"ID":"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9","Type":"ContainerStarted","Data":"05d0f12867a603bd6b6f742ee0e1a625a6d88a1e3c0dc7a454d9545bc56f88a9"}
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.742087 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" event={"ID":"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9","Type":"ContainerStarted","Data":"f50c499fac3713dd1a01010afdd8b2a68f2bc43bedd207c94f1b4a13cf96d7b6"}
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.742101 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" event={"ID":"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9","Type":"ContainerStarted","Data":"33bef08013bc48164ee4fed6ea22b2b1e72071414badc4046ad1d10bc2ab9c2a"}
Jan 27 17:10:06 crc kubenswrapper[5049]: I0127 17:10:06.742115 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" event={"ID":"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9","Type":"ContainerStarted","Data":"68aa99a0cc301ce8d016ac584b863cbd3cf951e91dbe2ddce14edb736634e95a"}
Jan 27 17:10:07 crc kubenswrapper[5049]: I0127 17:10:07.656302 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0ca704c-b740-43c4-845f-7de5bfa5a29c" path="/var/lib/kubelet/pods/b0ca704c-b740-43c4-845f-7de5bfa5a29c/volumes"
Jan 27 17:10:08 crc kubenswrapper[5049]: I0127 17:10:08.754359 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" event={"ID":"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9","Type":"ContainerStarted","Data":"fb5228ade11a7c53c938ddaee1f412ede01442ba441b4f5a55c530fd908bc07a"}
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.780055 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" event={"ID":"2f34a0ef-e0c4-4d4e-b827-b5955eb1d4e9","Type":"ContainerStarted","Data":"1c08d280798c75708a73a49ea8e86b2097657b2ffbe885773d3828c936e19939"}
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.780792 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s"
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.780882 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s"
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.780910 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s"
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.818927 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s" podStartSLOduration=7.818898583 podStartE2EDuration="7.818898583s" podCreationTimestamp="2026-01-27 17:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:10:11.815076442 +0000 UTC m=+786.914049991" watchObservedRunningTime="2026-01-27 17:10:11.818898583 +0000 UTC m=+786.917872132"
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.826131 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-hrj6g"]
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.827070 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s"
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.827201 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.831542 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.832070 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.834939 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.835140 5049 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-665z9"
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.840229 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s"
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.968198 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/029840ba-bdf3-4afb-a8d7-93c86d641dd9-node-mnt\") pod \"crc-storage-crc-hrj6g\" (UID: \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\") " pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.968247 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/029840ba-bdf3-4afb-a8d7-93c86d641dd9-crc-storage\") pod \"crc-storage-crc-hrj6g\" (UID: \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\") " pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:11 crc kubenswrapper[5049]: I0127 17:10:11.968532 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsjp4\" (UniqueName: \"kubernetes.io/projected/029840ba-bdf3-4afb-a8d7-93c86d641dd9-kube-api-access-vsjp4\") pod \"crc-storage-crc-hrj6g\" (UID: \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\") " pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:12 crc kubenswrapper[5049]: I0127 17:10:12.069717 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsjp4\" (UniqueName: \"kubernetes.io/projected/029840ba-bdf3-4afb-a8d7-93c86d641dd9-kube-api-access-vsjp4\") pod \"crc-storage-crc-hrj6g\" (UID: \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\") " pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:12 crc kubenswrapper[5049]: I0127 17:10:12.069824 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/029840ba-bdf3-4afb-a8d7-93c86d641dd9-node-mnt\") pod \"crc-storage-crc-hrj6g\" (UID: \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\") " pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:12 crc kubenswrapper[5049]: I0127 17:10:12.069863 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/029840ba-bdf3-4afb-a8d7-93c86d641dd9-crc-storage\") pod \"crc-storage-crc-hrj6g\" (UID: \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\") " pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:12 crc kubenswrapper[5049]: I0127 17:10:12.070232 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/029840ba-bdf3-4afb-a8d7-93c86d641dd9-node-mnt\") pod \"crc-storage-crc-hrj6g\" (UID: \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\") " pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:12 crc kubenswrapper[5049]: I0127 17:10:12.070565 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/029840ba-bdf3-4afb-a8d7-93c86d641dd9-crc-storage\") pod \"crc-storage-crc-hrj6g\" (UID: \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\") " pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:12 crc kubenswrapper[5049]: I0127 17:10:12.102203 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsjp4\" (UniqueName: \"kubernetes.io/projected/029840ba-bdf3-4afb-a8d7-93c86d641dd9-kube-api-access-vsjp4\") pod \"crc-storage-crc-hrj6g\" (UID: \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\") " pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:12 crc kubenswrapper[5049]: I0127 17:10:12.147008 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:12 crc kubenswrapper[5049]: E0127 17:10:12.168546 5049 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hrj6g_crc-storage_029840ba-bdf3-4afb-a8d7-93c86d641dd9_0(f1d7f8fed1e5a6f046d9e50ccf595f54fa7aec8dbb55c0e03c3b4f212f58a0e3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 27 17:10:12 crc kubenswrapper[5049]: E0127 17:10:12.168699 5049 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hrj6g_crc-storage_029840ba-bdf3-4afb-a8d7-93c86d641dd9_0(f1d7f8fed1e5a6f046d9e50ccf595f54fa7aec8dbb55c0e03c3b4f212f58a0e3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:12 crc kubenswrapper[5049]: E0127 17:10:12.168799 5049 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hrj6g_crc-storage_029840ba-bdf3-4afb-a8d7-93c86d641dd9_0(f1d7f8fed1e5a6f046d9e50ccf595f54fa7aec8dbb55c0e03c3b4f212f58a0e3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:12 crc kubenswrapper[5049]: E0127 17:10:12.168890 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-hrj6g_crc-storage(029840ba-bdf3-4afb-a8d7-93c86d641dd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-hrj6g_crc-storage(029840ba-bdf3-4afb-a8d7-93c86d641dd9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hrj6g_crc-storage_029840ba-bdf3-4afb-a8d7-93c86d641dd9_0(f1d7f8fed1e5a6f046d9e50ccf595f54fa7aec8dbb55c0e03c3b4f212f58a0e3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-hrj6g" podUID="029840ba-bdf3-4afb-a8d7-93c86d641dd9"
Jan 27 17:10:12 crc kubenswrapper[5049]: I0127 17:10:12.534003 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hrj6g"]
Jan 27 17:10:12 crc kubenswrapper[5049]: I0127 17:10:12.784878 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:12 crc kubenswrapper[5049]: I0127 17:10:12.785316 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:12 crc kubenswrapper[5049]: E0127 17:10:12.810760 5049 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hrj6g_crc-storage_029840ba-bdf3-4afb-a8d7-93c86d641dd9_0(71f6ab3d44c014ff6fbc071054562642cc26ba71499c64c522b72fac6d8ded94): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 27 17:10:12 crc kubenswrapper[5049]: E0127 17:10:12.810845 5049 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hrj6g_crc-storage_029840ba-bdf3-4afb-a8d7-93c86d641dd9_0(71f6ab3d44c014ff6fbc071054562642cc26ba71499c64c522b72fac6d8ded94): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:12 crc kubenswrapper[5049]: E0127 17:10:12.810875 5049 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hrj6g_crc-storage_029840ba-bdf3-4afb-a8d7-93c86d641dd9_0(71f6ab3d44c014ff6fbc071054562642cc26ba71499c64c522b72fac6d8ded94): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:12 crc kubenswrapper[5049]: E0127 17:10:12.810929 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-hrj6g_crc-storage(029840ba-bdf3-4afb-a8d7-93c86d641dd9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-hrj6g_crc-storage(029840ba-bdf3-4afb-a8d7-93c86d641dd9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-hrj6g_crc-storage_029840ba-bdf3-4afb-a8d7-93c86d641dd9_0(71f6ab3d44c014ff6fbc071054562642cc26ba71499c64c522b72fac6d8ded94): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-hrj6g" podUID="029840ba-bdf3-4afb-a8d7-93c86d641dd9"
Jan 27 17:10:17 crc kubenswrapper[5049]: I0127 17:10:17.781787 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:10:17 crc kubenswrapper[5049]: I0127 17:10:17.782449 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:10:24 crc kubenswrapper[5049]: I0127 17:10:24.645469 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:24 crc kubenswrapper[5049]: I0127 17:10:24.646142 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:25 crc kubenswrapper[5049]: I0127 17:10:25.118754 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hrj6g"]
Jan 27 17:10:25 crc kubenswrapper[5049]: I0127 17:10:25.134287 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 17:10:25 crc kubenswrapper[5049]: I0127 17:10:25.865839 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hrj6g" event={"ID":"029840ba-bdf3-4afb-a8d7-93c86d641dd9","Type":"ContainerStarted","Data":"7a8506c78928074ee432a141d05d086c9c2c259a44880cb8456a8859028383db"}
Jan 27 17:10:26 crc kubenswrapper[5049]: I0127 17:10:26.872251 5049 generic.go:334] "Generic (PLEG): container finished" podID="029840ba-bdf3-4afb-a8d7-93c86d641dd9" containerID="882c9fa655df72c653de293af240dc8b0c752c924ee65fdc7ba30e208b3972f0" exitCode=0
Jan 27 17:10:26 crc kubenswrapper[5049]: I0127 17:10:26.872359 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hrj6g" event={"ID":"029840ba-bdf3-4afb-a8d7-93c86d641dd9","Type":"ContainerDied","Data":"882c9fa655df72c653de293af240dc8b0c752c924ee65fdc7ba30e208b3972f0"}
Jan 27 17:10:28 crc kubenswrapper[5049]: I0127 17:10:28.097704 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:28 crc kubenswrapper[5049]: I0127 17:10:28.241375 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/029840ba-bdf3-4afb-a8d7-93c86d641dd9-crc-storage\") pod \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\" (UID: \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\") "
Jan 27 17:10:28 crc kubenswrapper[5049]: I0127 17:10:28.241764 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsjp4\" (UniqueName: \"kubernetes.io/projected/029840ba-bdf3-4afb-a8d7-93c86d641dd9-kube-api-access-vsjp4\") pod \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\" (UID: \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\") "
Jan 27 17:10:28 crc kubenswrapper[5049]: I0127 17:10:28.241816 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/029840ba-bdf3-4afb-a8d7-93c86d641dd9-node-mnt\") pod \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\" (UID: \"029840ba-bdf3-4afb-a8d7-93c86d641dd9\") "
Jan 27 17:10:28 crc kubenswrapper[5049]: I0127 17:10:28.242070 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/029840ba-bdf3-4afb-a8d7-93c86d641dd9-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "029840ba-bdf3-4afb-a8d7-93c86d641dd9" (UID: "029840ba-bdf3-4afb-a8d7-93c86d641dd9"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 17:10:28 crc kubenswrapper[5049]: I0127 17:10:28.246657 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/029840ba-bdf3-4afb-a8d7-93c86d641dd9-kube-api-access-vsjp4" (OuterVolumeSpecName: "kube-api-access-vsjp4") pod "029840ba-bdf3-4afb-a8d7-93c86d641dd9" (UID: "029840ba-bdf3-4afb-a8d7-93c86d641dd9"). InnerVolumeSpecName "kube-api-access-vsjp4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:10:28 crc kubenswrapper[5049]: I0127 17:10:28.258116 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/029840ba-bdf3-4afb-a8d7-93c86d641dd9-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "029840ba-bdf3-4afb-a8d7-93c86d641dd9" (UID: "029840ba-bdf3-4afb-a8d7-93c86d641dd9"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:10:28 crc kubenswrapper[5049]: I0127 17:10:28.343527 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsjp4\" (UniqueName: \"kubernetes.io/projected/029840ba-bdf3-4afb-a8d7-93c86d641dd9-kube-api-access-vsjp4\") on node \"crc\" DevicePath \"\""
Jan 27 17:10:28 crc kubenswrapper[5049]: I0127 17:10:28.343575 5049 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/029840ba-bdf3-4afb-a8d7-93c86d641dd9-node-mnt\") on node \"crc\" DevicePath \"\""
Jan 27 17:10:28 crc kubenswrapper[5049]: I0127 17:10:28.343588 5049 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/029840ba-bdf3-4afb-a8d7-93c86d641dd9-crc-storage\") on node \"crc\" DevicePath \"\""
Jan 27 17:10:28 crc kubenswrapper[5049]: I0127 17:10:28.888221 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hrj6g" event={"ID":"029840ba-bdf3-4afb-a8d7-93c86d641dd9","Type":"ContainerDied","Data":"7a8506c78928074ee432a141d05d086c9c2c259a44880cb8456a8859028383db"}
Jan 27 17:10:28 crc kubenswrapper[5049]: I0127 17:10:28.888290 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8506c78928074ee432a141d05d086c9c2c259a44880cb8456a8859028383db"
Jan 27 17:10:28 crc kubenswrapper[5049]: I0127 17:10:28.888332 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hrj6g"
Jan 27 17:10:35 crc kubenswrapper[5049]: I0127 17:10:35.269186 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j2m7s"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.229517 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"]
Jan 27 17:10:36 crc kubenswrapper[5049]: E0127 17:10:36.230036 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="029840ba-bdf3-4afb-a8d7-93c86d641dd9" containerName="storage"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.230052 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="029840ba-bdf3-4afb-a8d7-93c86d641dd9" containerName="storage"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.230168 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="029840ba-bdf3-4afb-a8d7-93c86d641dd9" containerName="storage"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.231034 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.232636 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.247826 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"]
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.345221 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16b53c32-6621-4bb5-b6c7-1b929414dd8c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5\" (UID: \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.345296 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16b53c32-6621-4bb5-b6c7-1b929414dd8c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5\" (UID: \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.345464 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ss6k\" (UniqueName: \"kubernetes.io/projected/16b53c32-6621-4bb5-b6c7-1b929414dd8c-kube-api-access-4ss6k\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5\" (UID: \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.446636 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16b53c32-6621-4bb5-b6c7-1b929414dd8c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5\" (UID: \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.446699 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16b53c32-6621-4bb5-b6c7-1b929414dd8c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5\" (UID: \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.446740 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ss6k\" (UniqueName: \"kubernetes.io/projected/16b53c32-6621-4bb5-b6c7-1b929414dd8c-kube-api-access-4ss6k\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5\" (UID: \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.447477 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16b53c32-6621-4bb5-b6c7-1b929414dd8c-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5\" (UID: \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.447495 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16b53c32-6621-4bb5-b6c7-1b929414dd8c-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5\" (UID: \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.473575 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ss6k\" (UniqueName: \"kubernetes.io/projected/16b53c32-6621-4bb5-b6c7-1b929414dd8c-kube-api-access-4ss6k\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5\" (UID: \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.545840 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.789108 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"]
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.940324 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5" event={"ID":"16b53c32-6621-4bb5-b6c7-1b929414dd8c","Type":"ContainerStarted","Data":"685d787c6b3fe177e16149116d880e8032a72d42247fa5500b002cb92370cb15"}
Jan 27 17:10:36 crc kubenswrapper[5049]: I0127 17:10:36.940434 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5" event={"ID":"16b53c32-6621-4bb5-b6c7-1b929414dd8c","Type":"ContainerStarted","Data":"f24f6b55c5a10baddd85b01164850970bd7b9648731c3ed8cc88fb6ac1833084"}
Jan 27 17:10:37 crc kubenswrapper[5049]: I0127 17:10:37.949583 5049 generic.go:334] "Generic (PLEG): container finished" podID="16b53c32-6621-4bb5-b6c7-1b929414dd8c" containerID="685d787c6b3fe177e16149116d880e8032a72d42247fa5500b002cb92370cb15" exitCode=0
Jan 27 17:10:37 crc kubenswrapper[5049]: I0127 17:10:37.949728 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5" event={"ID":"16b53c32-6621-4bb5-b6c7-1b929414dd8c","Type":"ContainerDied","Data":"685d787c6b3fe177e16149116d880e8032a72d42247fa5500b002cb92370cb15"}
Jan 27 17:10:38 crc kubenswrapper[5049]: I0127 17:10:38.547297 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7q49z"]
Jan 27 17:10:38 crc kubenswrapper[5049]: I0127 17:10:38.548582 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:38 crc kubenswrapper[5049]: I0127 17:10:38.552404 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7q49z"]
Jan 27 17:10:38 crc kubenswrapper[5049]: I0127 17:10:38.674962 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86011ac7-6366-423a-9b2b-2d217622df13-utilities\") pod \"redhat-operators-7q49z\" (UID: \"86011ac7-6366-423a-9b2b-2d217622df13\") " pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:38 crc kubenswrapper[5049]: I0127 17:10:38.675035 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87djb\" (UniqueName: \"kubernetes.io/projected/86011ac7-6366-423a-9b2b-2d217622df13-kube-api-access-87djb\") pod \"redhat-operators-7q49z\" (UID: \"86011ac7-6366-423a-9b2b-2d217622df13\") " pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:38 crc kubenswrapper[5049]: I0127 17:10:38.675086 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86011ac7-6366-423a-9b2b-2d217622df13-catalog-content\") pod \"redhat-operators-7q49z\" (UID: \"86011ac7-6366-423a-9b2b-2d217622df13\") " pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:38 crc kubenswrapper[5049]: I0127 17:10:38.776215 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86011ac7-6366-423a-9b2b-2d217622df13-catalog-content\") pod \"redhat-operators-7q49z\" (UID: \"86011ac7-6366-423a-9b2b-2d217622df13\") " pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:38 crc kubenswrapper[5049]: I0127 17:10:38.776376 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86011ac7-6366-423a-9b2b-2d217622df13-utilities\") pod \"redhat-operators-7q49z\" (UID: \"86011ac7-6366-423a-9b2b-2d217622df13\") " pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:38 crc kubenswrapper[5049]: I0127 17:10:38.776445 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87djb\" (UniqueName: \"kubernetes.io/projected/86011ac7-6366-423a-9b2b-2d217622df13-kube-api-access-87djb\") pod \"redhat-operators-7q49z\" (UID: \"86011ac7-6366-423a-9b2b-2d217622df13\") " pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:38 crc kubenswrapper[5049]: I0127 17:10:38.777071 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86011ac7-6366-423a-9b2b-2d217622df13-catalog-content\") pod \"redhat-operators-7q49z\" (UID: \"86011ac7-6366-423a-9b2b-2d217622df13\") " pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:38 crc kubenswrapper[5049]: I0127 17:10:38.777281 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86011ac7-6366-423a-9b2b-2d217622df13-utilities\") pod \"redhat-operators-7q49z\" (UID: \"86011ac7-6366-423a-9b2b-2d217622df13\") " pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:38 crc kubenswrapper[5049]: I0127 17:10:38.799789 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87djb\" (UniqueName: \"kubernetes.io/projected/86011ac7-6366-423a-9b2b-2d217622df13-kube-api-access-87djb\") pod \"redhat-operators-7q49z\" (UID: \"86011ac7-6366-423a-9b2b-2d217622df13\") " pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:38 crc kubenswrapper[5049]: I0127 17:10:38.868342 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:39 crc kubenswrapper[5049]: I0127 17:10:39.105551 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7q49z"]
Jan 27 17:10:39 crc kubenswrapper[5049]: W0127 17:10:39.114197 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86011ac7_6366_423a_9b2b_2d217622df13.slice/crio-3b92fb2b1141195ba86f15ec8e55858d464b6c14e317d35bcae0c5b626ba72c6 WatchSource:0}: Error finding container 3b92fb2b1141195ba86f15ec8e55858d464b6c14e317d35bcae0c5b626ba72c6: Status 404 returned error can't find the container with id 3b92fb2b1141195ba86f15ec8e55858d464b6c14e317d35bcae0c5b626ba72c6
Jan 27 17:10:39 crc kubenswrapper[5049]: I0127 17:10:39.960869 5049 generic.go:334] "Generic (PLEG): container finished" podID="86011ac7-6366-423a-9b2b-2d217622df13" containerID="9215d501e61ac9fca715160702da115c229830572b28f62f4bac749ba5a38aa6" exitCode=0
Jan 27 17:10:39 crc kubenswrapper[5049]: I0127 17:10:39.960949 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7q49z" event={"ID":"86011ac7-6366-423a-9b2b-2d217622df13","Type":"ContainerDied","Data":"9215d501e61ac9fca715160702da115c229830572b28f62f4bac749ba5a38aa6"}
Jan 27 17:10:39 crc kubenswrapper[5049]: I0127 17:10:39.960981 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7q49z" event={"ID":"86011ac7-6366-423a-9b2b-2d217622df13","Type":"ContainerStarted","Data":"3b92fb2b1141195ba86f15ec8e55858d464b6c14e317d35bcae0c5b626ba72c6"}
Jan 27 17:10:39 crc kubenswrapper[5049]: I0127 17:10:39.964366 5049 generic.go:334] "Generic (PLEG): container finished" podID="16b53c32-6621-4bb5-b6c7-1b929414dd8c" containerID="2ab6a95715b33a663d3b0021f84c874b36d114bd1898b1680b66638ece38b63e" exitCode=0
Jan 27 17:10:39 crc kubenswrapper[5049]: I0127 17:10:39.964463 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5" event={"ID":"16b53c32-6621-4bb5-b6c7-1b929414dd8c","Type":"ContainerDied","Data":"2ab6a95715b33a663d3b0021f84c874b36d114bd1898b1680b66638ece38b63e"}
Jan 27 17:10:40 crc kubenswrapper[5049]: I0127 17:10:40.970369 5049 generic.go:334] "Generic (PLEG): container finished" podID="16b53c32-6621-4bb5-b6c7-1b929414dd8c" containerID="8c49cdfc7cc35666ae94aa7adb04c2034da8dc38a114a294512aabd2e774de4f" exitCode=0
Jan 27 17:10:40 crc kubenswrapper[5049]: I0127 17:10:40.970455 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5" event={"ID":"16b53c32-6621-4bb5-b6c7-1b929414dd8c","Type":"ContainerDied","Data":"8c49cdfc7cc35666ae94aa7adb04c2034da8dc38a114a294512aabd2e774de4f"}
Jan 27 17:10:40 crc kubenswrapper[5049]: I0127 17:10:40.972191 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7q49z" event={"ID":"86011ac7-6366-423a-9b2b-2d217622df13","Type":"ContainerStarted","Data":"efee19cc7519ec9984abdbc1b99a9027b6fecba83b3123e917a27987197437f2"}
Jan 27 17:10:41 crc kubenswrapper[5049]: I0127 17:10:41.981345 5049 generic.go:334] "Generic (PLEG): container finished" podID="86011ac7-6366-423a-9b2b-2d217622df13" containerID="efee19cc7519ec9984abdbc1b99a9027b6fecba83b3123e917a27987197437f2" exitCode=0
Jan 27 17:10:41 crc kubenswrapper[5049]: I0127 17:10:41.981560 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7q49z" event={"ID":"86011ac7-6366-423a-9b2b-2d217622df13","Type":"ContainerDied","Data":"efee19cc7519ec9984abdbc1b99a9027b6fecba83b3123e917a27987197437f2"}
Jan 27 17:10:42 crc kubenswrapper[5049]: I0127 17:10:42.263129 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"
Jan 27 17:10:42 crc kubenswrapper[5049]: I0127 17:10:42.326255 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16b53c32-6621-4bb5-b6c7-1b929414dd8c-bundle\") pod \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\" (UID: \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\") "
Jan 27 17:10:42 crc kubenswrapper[5049]: I0127 17:10:42.326306 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16b53c32-6621-4bb5-b6c7-1b929414dd8c-util\") pod \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\" (UID: \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\") "
Jan 27 17:10:42 crc kubenswrapper[5049]: I0127 17:10:42.326466 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ss6k\" (UniqueName: \"kubernetes.io/projected/16b53c32-6621-4bb5-b6c7-1b929414dd8c-kube-api-access-4ss6k\") pod \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\" (UID: \"16b53c32-6621-4bb5-b6c7-1b929414dd8c\") "
Jan 27 17:10:42 crc kubenswrapper[5049]: I0127 17:10:42.326998 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b53c32-6621-4bb5-b6c7-1b929414dd8c-bundle" (OuterVolumeSpecName: "bundle") pod "16b53c32-6621-4bb5-b6c7-1b929414dd8c" (UID: "16b53c32-6621-4bb5-b6c7-1b929414dd8c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:10:42 crc kubenswrapper[5049]: I0127 17:10:42.334650 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16b53c32-6621-4bb5-b6c7-1b929414dd8c-kube-api-access-4ss6k" (OuterVolumeSpecName: "kube-api-access-4ss6k") pod "16b53c32-6621-4bb5-b6c7-1b929414dd8c" (UID: "16b53c32-6621-4bb5-b6c7-1b929414dd8c"). InnerVolumeSpecName "kube-api-access-4ss6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:10:42 crc kubenswrapper[5049]: I0127 17:10:42.340633 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16b53c32-6621-4bb5-b6c7-1b929414dd8c-util" (OuterVolumeSpecName: "util") pod "16b53c32-6621-4bb5-b6c7-1b929414dd8c" (UID: "16b53c32-6621-4bb5-b6c7-1b929414dd8c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:10:42 crc kubenswrapper[5049]: I0127 17:10:42.427597 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ss6k\" (UniqueName: \"kubernetes.io/projected/16b53c32-6621-4bb5-b6c7-1b929414dd8c-kube-api-access-4ss6k\") on node \"crc\" DevicePath \"\""
Jan 27 17:10:42 crc kubenswrapper[5049]: I0127 17:10:42.427631 5049 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16b53c32-6621-4bb5-b6c7-1b929414dd8c-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 17:10:42 crc kubenswrapper[5049]: I0127 17:10:42.427642 5049 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16b53c32-6621-4bb5-b6c7-1b929414dd8c-util\") on node \"crc\" DevicePath \"\""
Jan 27 17:10:42 crc kubenswrapper[5049]: I0127 17:10:42.990853 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5"
Jan 27 17:10:42 crc kubenswrapper[5049]: I0127 17:10:42.990848 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5" event={"ID":"16b53c32-6621-4bb5-b6c7-1b929414dd8c","Type":"ContainerDied","Data":"f24f6b55c5a10baddd85b01164850970bd7b9648731c3ed8cc88fb6ac1833084"}
Jan 27 17:10:42 crc kubenswrapper[5049]: I0127 17:10:42.991361 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f24f6b55c5a10baddd85b01164850970bd7b9648731c3ed8cc88fb6ac1833084"
Jan 27 17:10:42 crc kubenswrapper[5049]: I0127 17:10:42.993913 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7q49z" event={"ID":"86011ac7-6366-423a-9b2b-2d217622df13","Type":"ContainerStarted","Data":"4d3467d384f7b61417b4817faf80ce21a364fc00bba2988ea07477822c73163e"}
Jan 27 17:10:43 crc kubenswrapper[5049]: I0127 17:10:43.317173 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7q49z" podStartSLOduration=2.845941788 podStartE2EDuration="5.317153066s" podCreationTimestamp="2026-01-27 17:10:38 +0000 UTC" firstStartedPulling="2026-01-27 17:10:39.962388519 +0000 UTC m=+815.061362108" lastFinishedPulling="2026-01-27 17:10:42.433599827 +0000 UTC m=+817.532573386" observedRunningTime="2026-01-27 17:10:43.027206115 +0000 UTC m=+818.126179724" watchObservedRunningTime="2026-01-27 17:10:43.317153066 +0000 UTC m=+818.416126625"
Jan 27 17:10:46 crc kubenswrapper[5049]: I0127 17:10:46.678012 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ww57s"]
Jan 27 17:10:46 crc kubenswrapper[5049]: E0127 17:10:46.678255 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b53c32-6621-4bb5-b6c7-1b929414dd8c" containerName="util"
Jan 27 17:10:46 crc kubenswrapper[5049]: I0127 17:10:46.678269 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b53c32-6621-4bb5-b6c7-1b929414dd8c" containerName="util"
Jan 27 17:10:46 crc kubenswrapper[5049]: E0127 17:10:46.678286 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b53c32-6621-4bb5-b6c7-1b929414dd8c" containerName="pull"
Jan 27 17:10:46 crc kubenswrapper[5049]: I0127 17:10:46.678295 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b53c32-6621-4bb5-b6c7-1b929414dd8c" containerName="pull"
Jan 27 17:10:46 crc kubenswrapper[5049]: E0127 17:10:46.678310 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16b53c32-6621-4bb5-b6c7-1b929414dd8c" containerName="extract"
Jan 27 17:10:46 crc kubenswrapper[5049]: I0127 17:10:46.678320 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="16b53c32-6621-4bb5-b6c7-1b929414dd8c" containerName="extract"
Jan 27 17:10:46 crc kubenswrapper[5049]: I0127 17:10:46.678463 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="16b53c32-6621-4bb5-b6c7-1b929414dd8c" containerName="extract"
Jan 27 17:10:46 crc kubenswrapper[5049]: I0127 17:10:46.678970 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-ww57s"
Jan 27 17:10:46 crc kubenswrapper[5049]: I0127 17:10:46.681063 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-thfxl"
Jan 27 17:10:46 crc kubenswrapper[5049]: I0127 17:10:46.681066 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 27 17:10:46 crc kubenswrapper[5049]: I0127 17:10:46.681124 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 27 17:10:46 crc kubenswrapper[5049]: I0127 17:10:46.694642 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ww57s"]
Jan 27 17:10:46 crc kubenswrapper[5049]: I0127 17:10:46.786429 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk2sh\" (UniqueName: \"kubernetes.io/projected/c9167faa-e84f-454f-9628-071acd6f4e99-kube-api-access-hk2sh\") pod \"nmstate-operator-646758c888-ww57s\" (UID: \"c9167faa-e84f-454f-9628-071acd6f4e99\") " pod="openshift-nmstate/nmstate-operator-646758c888-ww57s"
Jan 27 17:10:46 crc kubenswrapper[5049]: I0127 17:10:46.888073 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk2sh\" (UniqueName: \"kubernetes.io/projected/c9167faa-e84f-454f-9628-071acd6f4e99-kube-api-access-hk2sh\") pod \"nmstate-operator-646758c888-ww57s\" (UID: \"c9167faa-e84f-454f-9628-071acd6f4e99\") " pod="openshift-nmstate/nmstate-operator-646758c888-ww57s"
Jan 27 17:10:46 crc kubenswrapper[5049]: I0127 17:10:46.915395 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk2sh\" (UniqueName: \"kubernetes.io/projected/c9167faa-e84f-454f-9628-071acd6f4e99-kube-api-access-hk2sh\") pod \"nmstate-operator-646758c888-ww57s\" (UID: \"c9167faa-e84f-454f-9628-071acd6f4e99\") " pod="openshift-nmstate/nmstate-operator-646758c888-ww57s"
Jan 27 17:10:46 crc kubenswrapper[5049]: I0127 17:10:46.995541 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-ww57s"
Jan 27 17:10:47 crc kubenswrapper[5049]: I0127 17:10:47.448220 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ww57s"]
Jan 27 17:10:47 crc kubenswrapper[5049]: W0127 17:10:47.454076 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9167faa_e84f_454f_9628_071acd6f4e99.slice/crio-45befae14fdf97a7d30acea3e8255be9f1cdf5b0029f2f58dd6cc7fce38d6ccc WatchSource:0}: Error finding container 45befae14fdf97a7d30acea3e8255be9f1cdf5b0029f2f58dd6cc7fce38d6ccc: Status 404 returned error can't find the container with id 45befae14fdf97a7d30acea3e8255be9f1cdf5b0029f2f58dd6cc7fce38d6ccc
Jan 27 17:10:47 crc kubenswrapper[5049]: I0127 17:10:47.781887 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:10:47 crc kubenswrapper[5049]: I0127 17:10:47.781948 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:10:48 crc kubenswrapper[5049]: I0127 17:10:48.024114 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-ww57s" event={"ID":"c9167faa-e84f-454f-9628-071acd6f4e99","Type":"ContainerStarted","Data":"45befae14fdf97a7d30acea3e8255be9f1cdf5b0029f2f58dd6cc7fce38d6ccc"}
Jan 27 17:10:48 crc kubenswrapper[5049]: I0127 17:10:48.868982 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:48 crc kubenswrapper[5049]: I0127 17:10:48.869645 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:49 crc kubenswrapper[5049]: I0127 17:10:49.916967 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7q49z" podUID="86011ac7-6366-423a-9b2b-2d217622df13" containerName="registry-server" probeResult="failure" output=<
Jan 27 17:10:49 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s
Jan 27 17:10:49 crc kubenswrapper[5049]: >
Jan 27 17:10:50 crc kubenswrapper[5049]: I0127 17:10:50.040451 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-ww57s" event={"ID":"c9167faa-e84f-454f-9628-071acd6f4e99","Type":"ContainerStarted","Data":"f38f0537d1b251dc2862da9b96056fc988c6e549556caf27ca1b691f1fde8b95"}
Jan 27 17:10:50 crc kubenswrapper[5049]: I0127 17:10:50.064495 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-ww57s" podStartSLOduration=1.757942006 podStartE2EDuration="4.064477764s" podCreationTimestamp="2026-01-27 17:10:46 +0000 UTC" firstStartedPulling="2026-01-27 17:10:47.455620757 +0000 UTC m=+822.554594296" lastFinishedPulling="2026-01-27 17:10:49.762156485 +0000 UTC m=+824.861130054" observedRunningTime="2026-01-27 17:10:50.062441305 +0000 UTC m=+825.161414924" watchObservedRunningTime="2026-01-27 17:10:50.064477764 +0000 UTC m=+825.163451323"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.541388 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k94tc"]
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.542968 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-k94tc"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.545790 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-ftp6f"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.564992 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k94tc"]
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.581770 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-gc6fh"]
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.582625 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.584542 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd"]
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.585662 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.587323 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.593074 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd"]
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.610430 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6058058d-b942-4659-b721-80eb7add600d-dbus-socket\") pod \"nmstate-handler-gc6fh\" (UID: \"6058058d-b942-4659-b721-80eb7add600d\") " pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.610483 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6058058d-b942-4659-b721-80eb7add600d-ovs-socket\") pod \"nmstate-handler-gc6fh\" (UID: \"6058058d-b942-4659-b721-80eb7add600d\") " pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.610506 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6058058d-b942-4659-b721-80eb7add600d-nmstate-lock\") pod \"nmstate-handler-gc6fh\" (UID: \"6058058d-b942-4659-b721-80eb7add600d\") " pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.610539 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k82p8\" (UniqueName: \"kubernetes.io/projected/6058058d-b942-4659-b721-80eb7add600d-kube-api-access-k82p8\") pod \"nmstate-handler-gc6fh\" (UID: \"6058058d-b942-4659-b721-80eb7add600d\") " pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.671243 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4"]
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.672095 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.674430 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.674594 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cz8hp"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.674728 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.688340 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4"]
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.719022 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhf25\" (UniqueName: \"kubernetes.io/projected/9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f-kube-api-access-fhf25\") pod \"nmstate-webhook-8474b5b9d8-wphmd\" (UID: \"9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.719437 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6058058d-b942-4659-b721-80eb7add600d-dbus-socket\") pod \"nmstate-handler-gc6fh\" (UID: \"6058058d-b942-4659-b721-80eb7add600d\") " pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.719259 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6058058d-b942-4659-b721-80eb7add600d-dbus-socket\") pod \"nmstate-handler-gc6fh\" (UID: \"6058058d-b942-4659-b721-80eb7add600d\") " pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.719648 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6058058d-b942-4659-b721-80eb7add600d-ovs-socket\") pod \"nmstate-handler-gc6fh\" (UID: \"6058058d-b942-4659-b721-80eb7add600d\") " pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.719694 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg8bx\" (UniqueName: \"kubernetes.io/projected/419ed877-28f2-4a53-87d2-51c31c16a385-kube-api-access-dg8bx\") pod \"nmstate-metrics-54757c584b-k94tc\" (UID: \"419ed877-28f2-4a53-87d2-51c31c16a385\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k94tc"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.719714 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6058058d-b942-4659-b721-80eb7add600d-nmstate-lock\") pod \"nmstate-handler-gc6fh\" (UID: \"6058058d-b942-4659-b721-80eb7add600d\") " pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.719741 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wphmd\" (UID: \"9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.720025 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k82p8\" (UniqueName: \"kubernetes.io/projected/6058058d-b942-4659-b721-80eb7add600d-kube-api-access-k82p8\") pod \"nmstate-handler-gc6fh\" (UID: \"6058058d-b942-4659-b721-80eb7add600d\") " pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.720948 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6058058d-b942-4659-b721-80eb7add600d-ovs-socket\") pod \"nmstate-handler-gc6fh\" (UID: \"6058058d-b942-4659-b721-80eb7add600d\") " pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.720975 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6058058d-b942-4659-b721-80eb7add600d-nmstate-lock\") pod \"nmstate-handler-gc6fh\" (UID: \"6058058d-b942-4659-b721-80eb7add600d\") " pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.738034 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k82p8\" (UniqueName: \"kubernetes.io/projected/6058058d-b942-4659-b721-80eb7add600d-kube-api-access-k82p8\") pod \"nmstate-handler-gc6fh\" (UID: \"6058058d-b942-4659-b721-80eb7add600d\") " pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.822458 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf7a2b61-35ff-47f7-b2fa-65232d56e55e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nhtv4\" (UID: \"bf7a2b61-35ff-47f7-b2fa-65232d56e55e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.822516 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mcml\" (UniqueName: \"kubernetes.io/projected/bf7a2b61-35ff-47f7-b2fa-65232d56e55e-kube-api-access-5mcml\") pod \"nmstate-console-plugin-7754f76f8b-nhtv4\" (UID: \"bf7a2b61-35ff-47f7-b2fa-65232d56e55e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.822582 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhf25\" (UniqueName: \"kubernetes.io/projected/9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f-kube-api-access-fhf25\") pod \"nmstate-webhook-8474b5b9d8-wphmd\" (UID: \"9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.822622 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg8bx\" (UniqueName: \"kubernetes.io/projected/419ed877-28f2-4a53-87d2-51c31c16a385-kube-api-access-dg8bx\") pod \"nmstate-metrics-54757c584b-k94tc\" (UID: \"419ed877-28f2-4a53-87d2-51c31c16a385\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k94tc"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.822649 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wphmd\" (UID: \"9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.822687 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bf7a2b61-35ff-47f7-b2fa-65232d56e55e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nhtv4\" (UID: \"bf7a2b61-35ff-47f7-b2fa-65232d56e55e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.827355 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wphmd\" (UID: \"9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.847592 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhf25\" (UniqueName: \"kubernetes.io/projected/9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f-kube-api-access-fhf25\") pod \"nmstate-webhook-8474b5b9d8-wphmd\" (UID: \"9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.859478 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg8bx\" (UniqueName: \"kubernetes.io/projected/419ed877-28f2-4a53-87d2-51c31c16a385-kube-api-access-dg8bx\") pod \"nmstate-metrics-54757c584b-k94tc\" (UID: \"419ed877-28f2-4a53-87d2-51c31c16a385\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-k94tc"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.864238 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-k94tc"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.898938 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.912093 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.925055 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf7a2b61-35ff-47f7-b2fa-65232d56e55e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nhtv4\" (UID: \"bf7a2b61-35ff-47f7-b2fa-65232d56e55e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.925109 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mcml\" (UniqueName: \"kubernetes.io/projected/bf7a2b61-35ff-47f7-b2fa-65232d56e55e-kube-api-access-5mcml\") pod \"nmstate-console-plugin-7754f76f8b-nhtv4\" (UID: \"bf7a2b61-35ff-47f7-b2fa-65232d56e55e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.925157 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bf7a2b61-35ff-47f7-b2fa-65232d56e55e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nhtv4\" (UID: \"bf7a2b61-35ff-47f7-b2fa-65232d56e55e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.925917 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bf7a2b61-35ff-47f7-b2fa-65232d56e55e-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nhtv4\" (UID: \"bf7a2b61-35ff-47f7-b2fa-65232d56e55e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.930963 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-78789f6f9d-h896p"]
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.931404 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf7a2b61-35ff-47f7-b2fa-65232d56e55e-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nhtv4\" (UID: \"bf7a2b61-35ff-47f7-b2fa-65232d56e55e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.931722 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-78789f6f9d-h896p"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.949787 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78789f6f9d-h896p"]
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.950307 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mcml\" (UniqueName: \"kubernetes.io/projected/bf7a2b61-35ff-47f7-b2fa-65232d56e55e-kube-api-access-5mcml\") pod \"nmstate-console-plugin-7754f76f8b-nhtv4\" (UID: \"bf7a2b61-35ff-47f7-b2fa-65232d56e55e\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4"
Jan 27 17:10:56 crc kubenswrapper[5049]: I0127 17:10:56.988242 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4"
Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.026203 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3b41714-58ab-4b03-b22c-3b062668677e-oauth-serving-cert\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p"
Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.026284 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3b41714-58ab-4b03-b22c-3b062668677e-console-oauth-config\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p"
Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.026607 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3b41714-58ab-4b03-b22c-3b062668677e-trusted-ca-bundle\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p"
Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.026632 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3b41714-58ab-4b03-b22c-3b062668677e-console-serving-cert\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p"
Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.026667 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7szl\" (UniqueName: \"kubernetes.io/projected/a3b41714-58ab-4b03-b22c-3b062668677e-kube-api-access-v7szl\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p"
Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.026861 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3b41714-58ab-4b03-b22c-3b062668677e-service-ca\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p"
Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.026992 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3b41714-58ab-4b03-b22c-3b062668677e-console-config\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p"
Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.083825 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gc6fh" event={"ID":"6058058d-b942-4659-b721-80eb7add600d","Type":"ContainerStarted","Data":"1963e442f1cf54a9b652bcf621605ccb7d4b511851b988220ad8fb3e5247d278"}
Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.108986 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-k94tc"]
Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.127561 5049
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3b41714-58ab-4b03-b22c-3b062668677e-console-oauth-config\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.127602 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3b41714-58ab-4b03-b22c-3b062668677e-trusted-ca-bundle\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.127622 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3b41714-58ab-4b03-b22c-3b062668677e-console-serving-cert\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.127636 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7szl\" (UniqueName: \"kubernetes.io/projected/a3b41714-58ab-4b03-b22c-3b062668677e-kube-api-access-v7szl\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.127659 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3b41714-58ab-4b03-b22c-3b062668677e-service-ca\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.127700 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3b41714-58ab-4b03-b22c-3b062668677e-console-config\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.127814 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3b41714-58ab-4b03-b22c-3b062668677e-oauth-serving-cert\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.128661 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3b41714-58ab-4b03-b22c-3b062668677e-console-config\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.128732 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3b41714-58ab-4b03-b22c-3b062668677e-service-ca\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.128881 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3b41714-58ab-4b03-b22c-3b062668677e-trusted-ca-bundle\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.128947 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3b41714-58ab-4b03-b22c-3b062668677e-oauth-serving-cert\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.131990 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3b41714-58ab-4b03-b22c-3b062668677e-console-oauth-config\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.132938 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3b41714-58ab-4b03-b22c-3b062668677e-console-serving-cert\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.142383 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7szl\" (UniqueName: \"kubernetes.io/projected/a3b41714-58ab-4b03-b22c-3b062668677e-kube-api-access-v7szl\") pod \"console-78789f6f9d-h896p\" (UID: \"a3b41714-58ab-4b03-b22c-3b062668677e\") " pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.261706 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.376287 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd"] Jan 27 17:10:57 crc kubenswrapper[5049]: W0127 17:10:57.391765 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c3a3ed4_fabe_48cb_86fa_b7474a8e9b8f.slice/crio-2dd0317962d78b15c94dab3045ac81b3def296c85da699723332443257d77710 WatchSource:0}: Error finding container 2dd0317962d78b15c94dab3045ac81b3def296c85da699723332443257d77710: Status 404 returned error can't find the container with id 2dd0317962d78b15c94dab3045ac81b3def296c85da699723332443257d77710 Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.419949 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4"] Jan 27 17:10:57 crc kubenswrapper[5049]: W0127 17:10:57.427460 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf7a2b61_35ff_47f7_b2fa_65232d56e55e.slice/crio-e3e986e93ee20f67a003d6f9d5e1130346fcebe081554653f20a49118697193a WatchSource:0}: Error finding container e3e986e93ee20f67a003d6f9d5e1130346fcebe081554653f20a49118697193a: Status 404 returned error can't find the container with id e3e986e93ee20f67a003d6f9d5e1130346fcebe081554653f20a49118697193a Jan 27 17:10:57 crc kubenswrapper[5049]: I0127 17:10:57.476146 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-78789f6f9d-h896p"] Jan 27 17:10:57 crc kubenswrapper[5049]: W0127 17:10:57.480499 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3b41714_58ab_4b03_b22c_3b062668677e.slice/crio-86ed026e4b7363857df96dac3d40be774851a4f0199b5f373d97e1d6c1ff4ba4 WatchSource:0}: Error finding container 86ed026e4b7363857df96dac3d40be774851a4f0199b5f373d97e1d6c1ff4ba4: Status 404 returned error can't find the container with id 86ed026e4b7363857df96dac3d40be774851a4f0199b5f373d97e1d6c1ff4ba4 Jan 27 17:10:58 crc kubenswrapper[5049]: I0127 17:10:58.089246 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4" event={"ID":"bf7a2b61-35ff-47f7-b2fa-65232d56e55e","Type":"ContainerStarted","Data":"e3e986e93ee20f67a003d6f9d5e1130346fcebe081554653f20a49118697193a"} Jan 27 17:10:58 crc kubenswrapper[5049]: I0127 17:10:58.090236 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k94tc" event={"ID":"419ed877-28f2-4a53-87d2-51c31c16a385","Type":"ContainerStarted","Data":"05c965d281d71fbd288d8284ceaa5dc955242e5e6d0322b6fbddccc352c31b57"} Jan 27 17:10:58 crc kubenswrapper[5049]: I0127 17:10:58.091741 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78789f6f9d-h896p" event={"ID":"a3b41714-58ab-4b03-b22c-3b062668677e","Type":"ContainerStarted","Data":"031d674064d59fafacc6893d8cb7e31d9a8b1a8b62f6f4a3c667a9a019a76aee"} Jan 27 17:10:58 crc kubenswrapper[5049]: I0127 17:10:58.091761 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-78789f6f9d-h896p" event={"ID":"a3b41714-58ab-4b03-b22c-3b062668677e","Type":"ContainerStarted","Data":"86ed026e4b7363857df96dac3d40be774851a4f0199b5f373d97e1d6c1ff4ba4"} Jan 27 17:10:58 crc 
Jan 27 17:10:58 crc kubenswrapper[5049]: I0127 17:10:58.092838 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd" event={"ID":"9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f","Type":"ContainerStarted","Data":"2dd0317962d78b15c94dab3045ac81b3def296c85da699723332443257d77710"}
Jan 27 17:10:58 crc kubenswrapper[5049]: I0127 17:10:58.114934 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-78789f6f9d-h896p" podStartSLOduration=2.114916516 podStartE2EDuration="2.114916516s" podCreationTimestamp="2026-01-27 17:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:10:58.110834808 +0000 UTC m=+833.209808397" watchObservedRunningTime="2026-01-27 17:10:58.114916516 +0000 UTC m=+833.213890065"
Jan 27 17:10:58 crc kubenswrapper[5049]: I0127 17:10:58.911561 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:58 crc kubenswrapper[5049]: I0127 17:10:58.950855 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:10:59 crc kubenswrapper[5049]: I0127 17:10:59.141381 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7q49z"]
Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.106917 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k94tc" event={"ID":"419ed877-28f2-4a53-87d2-51c31c16a385","Type":"ContainerStarted","Data":"bd2a044feef746f9afbd1dc43a14ad2b1d6c2677baacfacab765522c8ecf1ee3"}
Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.108467 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gc6fh" event={"ID":"6058058d-b942-4659-b721-80eb7add600d","Type":"ContainerStarted","Data":"eb382da0f20323e2b3d0bf94ebf8efcb646595de00ed4cb03e95d18cbe9ea5ad"}
Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.108643 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-gc6fh"
Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.110499 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd" event={"ID":"9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f","Type":"ContainerStarted","Data":"ee21ba7a59c7a274142b6fbd90252d9deb7967cc97adc67b68974d5541959393"}
Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.110550 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7q49z" podUID="86011ac7-6366-423a-9b2b-2d217622df13" containerName="registry-server" containerID="cri-o://4d3467d384f7b61417b4817faf80ce21a364fc00bba2988ea07477822c73163e" gracePeriod=2
Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.110773 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd"
Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.130067 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-gc6fh" podStartSLOduration=2.003947091 podStartE2EDuration="4.13005099s" podCreationTimestamp="2026-01-27 17:10:56 +0000 UTC" firstStartedPulling="2026-01-27 17:10:56.965415502 +0000 UTC m=+832.064389051" lastFinishedPulling="2026-01-27 17:10:59.091519401 +0000 UTC m=+834.190492950" observedRunningTime="2026-01-27 17:11:00.125313383 +0000 UTC m=+835.224287022" watchObservedRunningTime="2026-01-27 17:11:00.13005099 +0000 UTC m=+835.229024539"
Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.196835 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd" podStartSLOduration=2.499000096 podStartE2EDuration="4.196817435s" podCreationTimestamp="2026-01-27 17:10:56 +0000 UTC" firstStartedPulling="2026-01-27 17:10:57.396341898 +0000 UTC m=+832.495315447" lastFinishedPulling="2026-01-27 17:10:59.094159237 +0000 UTC m=+834.193132786" observedRunningTime="2026-01-27 17:11:00.196108104 +0000 UTC m=+835.295081733" watchObservedRunningTime="2026-01-27 17:11:00.196817435 +0000 UTC m=+835.295790974"
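The two pod_startup_latency_tracker.go:104 entries above make the kubelet's startup SLO accounting visible: podStartE2EDuration is wall-clock time from podCreationTimestamp to the observed running time, while podStartSLOduration excludes the image-pull window (firstStartedPulling to lastFinishedPulling). For console-78789f6f9d-h896p both values are 2.114916516s because its pull timestamps are zeroed (the image was already on the node); nmstate-handler-gc6fh spent about 2.126s pulling (17:10:56.965 to 17:10:59.091), so its 4.13005099s end-to-end time collapses to a 2.003947091s SLO figure. A hedged stdlib-Go sketch (slo_parse.go is an assumed name, not kubelet tooling) that tabulates these pairs from a journal stream:

```go
// slo_parse.go (assumed name): extract pod startup SLO/E2E duration pairs
// from kubelet journal lines such as the pod_startup_latency_tracker.go:104
// entries above. Usage: journalctl -u kubelet | go run slo_parse.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches: pod="ns/name" podStartSLOduration=2.003947091 podStartE2EDuration="4.13005099s"
var sloRE = regexp.MustCompile(`pod="([^"]+)" podStartSLOduration=([0-9.]+) podStartE2EDuration="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines run long
	for sc.Scan() {
		if m := sloRE.FindStringSubmatch(sc.Text()); m != nil {
			// The gap between the two values is the image-pull window.
			fmt.Printf("%-55s slo=%ss e2e=%s\n", m[1], m[2], m[3])
		}
	}
}
```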
Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.448246 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7q49z"
Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.596707 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86011ac7-6366-423a-9b2b-2d217622df13-utilities\") pod \"86011ac7-6366-423a-9b2b-2d217622df13\" (UID: \"86011ac7-6366-423a-9b2b-2d217622df13\") "
Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.596762 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87djb\" (UniqueName: \"kubernetes.io/projected/86011ac7-6366-423a-9b2b-2d217622df13-kube-api-access-87djb\") pod \"86011ac7-6366-423a-9b2b-2d217622df13\" (UID: \"86011ac7-6366-423a-9b2b-2d217622df13\") "
Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.596926 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86011ac7-6366-423a-9b2b-2d217622df13-catalog-content\") pod \"86011ac7-6366-423a-9b2b-2d217622df13\" (UID: \"86011ac7-6366-423a-9b2b-2d217622df13\") "
Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.597804 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86011ac7-6366-423a-9b2b-2d217622df13-utilities" (OuterVolumeSpecName: "utilities") pod "86011ac7-6366-423a-9b2b-2d217622df13" (UID: "86011ac7-6366-423a-9b2b-2d217622df13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.614723 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86011ac7-6366-423a-9b2b-2d217622df13-kube-api-access-87djb" (OuterVolumeSpecName: "kube-api-access-87djb") pod "86011ac7-6366-423a-9b2b-2d217622df13" (UID: "86011ac7-6366-423a-9b2b-2d217622df13"). InnerVolumeSpecName "kube-api-access-87djb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.698884 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86011ac7-6366-423a-9b2b-2d217622df13-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.698918 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87djb\" (UniqueName: \"kubernetes.io/projected/86011ac7-6366-423a-9b2b-2d217622df13-kube-api-access-87djb\") on node \"crc\" DevicePath \"\"" Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.733899 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86011ac7-6366-423a-9b2b-2d217622df13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86011ac7-6366-423a-9b2b-2d217622df13" (UID: "86011ac7-6366-423a-9b2b-2d217622df13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:11:00 crc kubenswrapper[5049]: I0127 17:11:00.799720 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86011ac7-6366-423a-9b2b-2d217622df13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.117376 5049 generic.go:334] "Generic (PLEG): container finished" podID="86011ac7-6366-423a-9b2b-2d217622df13" containerID="4d3467d384f7b61417b4817faf80ce21a364fc00bba2988ea07477822c73163e" exitCode=0 Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.117460 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7q49z" event={"ID":"86011ac7-6366-423a-9b2b-2d217622df13","Type":"ContainerDied","Data":"4d3467d384f7b61417b4817faf80ce21a364fc00bba2988ea07477822c73163e"} Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.117513 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7q49z" event={"ID":"86011ac7-6366-423a-9b2b-2d217622df13","Type":"ContainerDied","Data":"3b92fb2b1141195ba86f15ec8e55858d464b6c14e317d35bcae0c5b626ba72c6"} Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.117532 5049 scope.go:117] "RemoveContainer" containerID="4d3467d384f7b61417b4817faf80ce21a364fc00bba2988ea07477822c73163e" Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.117539 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7q49z" Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.120177 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4" event={"ID":"bf7a2b61-35ff-47f7-b2fa-65232d56e55e","Type":"ContainerStarted","Data":"439f581d66a3ffe433b3128087254b990f285c6728d86d6d62e71a8da86ef223"} Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.149717 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nhtv4" podStartSLOduration=2.579078485 podStartE2EDuration="5.149689972s" podCreationTimestamp="2026-01-27 17:10:56 +0000 UTC" firstStartedPulling="2026-01-27 17:10:57.431312971 +0000 UTC m=+832.530286520" lastFinishedPulling="2026-01-27 17:11:00.001924458 +0000 UTC m=+835.100898007" observedRunningTime="2026-01-27 17:11:01.13996415 +0000 UTC m=+836.238937689" watchObservedRunningTime="2026-01-27 17:11:01.149689972 +0000 UTC m=+836.248663521" Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.173980 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7q49z"] Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.180122 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7q49z"] Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.380968 5049 scope.go:117] "RemoveContainer" containerID="efee19cc7519ec9984abdbc1b99a9027b6fecba83b3123e917a27987197437f2" Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.399054 5049 scope.go:117] "RemoveContainer" containerID="9215d501e61ac9fca715160702da115c229830572b28f62f4bac749ba5a38aa6" Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.451694 5049 scope.go:117] "RemoveContainer" containerID="4d3467d384f7b61417b4817faf80ce21a364fc00bba2988ea07477822c73163e" Jan 27 17:11:01 crc kubenswrapper[5049]: E0127 17:11:01.452158 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d3467d384f7b61417b4817faf80ce21a364fc00bba2988ea07477822c73163e\": container with ID starting with 4d3467d384f7b61417b4817faf80ce21a364fc00bba2988ea07477822c73163e not found: ID does not exist" containerID="4d3467d384f7b61417b4817faf80ce21a364fc00bba2988ea07477822c73163e" Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.452207 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d3467d384f7b61417b4817faf80ce21a364fc00bba2988ea07477822c73163e"} err="failed to get container status \"4d3467d384f7b61417b4817faf80ce21a364fc00bba2988ea07477822c73163e\": rpc error: code = NotFound desc = could not find container \"4d3467d384f7b61417b4817faf80ce21a364fc00bba2988ea07477822c73163e\": container with ID starting with 4d3467d384f7b61417b4817faf80ce21a364fc00bba2988ea07477822c73163e not found: ID does not exist" Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.452237 5049 scope.go:117] "RemoveContainer" containerID="efee19cc7519ec9984abdbc1b99a9027b6fecba83b3123e917a27987197437f2" Jan 27 17:11:01 crc kubenswrapper[5049]: E0127 17:11:01.452519 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efee19cc7519ec9984abdbc1b99a9027b6fecba83b3123e917a27987197437f2\": container with ID starting with efee19cc7519ec9984abdbc1b99a9027b6fecba83b3123e917a27987197437f2 not found: ID does not exist" 
containerID="efee19cc7519ec9984abdbc1b99a9027b6fecba83b3123e917a27987197437f2" Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.452545 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efee19cc7519ec9984abdbc1b99a9027b6fecba83b3123e917a27987197437f2"} err="failed to get container status \"efee19cc7519ec9984abdbc1b99a9027b6fecba83b3123e917a27987197437f2\": rpc error: code = NotFound desc = could not find container \"efee19cc7519ec9984abdbc1b99a9027b6fecba83b3123e917a27987197437f2\": container with ID starting with efee19cc7519ec9984abdbc1b99a9027b6fecba83b3123e917a27987197437f2 not found: ID does not exist" Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.452559 5049 scope.go:117] "RemoveContainer" containerID="9215d501e61ac9fca715160702da115c229830572b28f62f4bac749ba5a38aa6" Jan 27 17:11:01 crc kubenswrapper[5049]: E0127 17:11:01.452770 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9215d501e61ac9fca715160702da115c229830572b28f62f4bac749ba5a38aa6\": container with ID starting with 9215d501e61ac9fca715160702da115c229830572b28f62f4bac749ba5a38aa6 not found: ID does not exist" containerID="9215d501e61ac9fca715160702da115c229830572b28f62f4bac749ba5a38aa6" Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.452786 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9215d501e61ac9fca715160702da115c229830572b28f62f4bac749ba5a38aa6"} err="failed to get container status \"9215d501e61ac9fca715160702da115c229830572b28f62f4bac749ba5a38aa6\": rpc error: code = NotFound desc = could not find container \"9215d501e61ac9fca715160702da115c229830572b28f62f4bac749ba5a38aa6\": container with ID starting with 9215d501e61ac9fca715160702da115c229830572b28f62f4bac749ba5a38aa6 not found: ID does not exist" Jan 27 17:11:01 crc kubenswrapper[5049]: I0127 17:11:01.655311 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86011ac7-6366-423a-9b2b-2d217622df13" path="/var/lib/kubelet/pods/86011ac7-6366-423a-9b2b-2d217622df13/volumes" Jan 27 17:11:02 crc kubenswrapper[5049]: I0127 17:11:02.131104 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-k94tc" event={"ID":"419ed877-28f2-4a53-87d2-51c31c16a385","Type":"ContainerStarted","Data":"629fe37c2fb644e68a08be89cdf7fed72987e412e1a53b5ea0b7c81ba2c9767e"} Jan 27 17:11:06 crc kubenswrapper[5049]: I0127 17:11:06.936845 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-gc6fh" Jan 27 17:11:06 crc kubenswrapper[5049]: I0127 17:11:06.959763 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-k94tc" podStartSLOduration=6.619741933 podStartE2EDuration="10.959741364s" podCreationTimestamp="2026-01-27 17:10:56 +0000 UTC" firstStartedPulling="2026-01-27 17:10:57.120579548 +0000 UTC m=+832.219553097" lastFinishedPulling="2026-01-27 17:11:01.460578959 +0000 UTC m=+836.559552528" observedRunningTime="2026-01-27 17:11:02.154216225 +0000 UTC m=+837.253189844" watchObservedRunningTime="2026-01-27 17:11:06.959741364 +0000 UTC m=+842.058714923" Jan 27 17:11:07 crc kubenswrapper[5049]: I0127 17:11:07.262466 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:11:07 crc kubenswrapper[5049]: I0127 17:11:07.262530 5049 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:11:07 crc kubenswrapper[5049]: I0127 17:11:07.271434 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:11:08 crc kubenswrapper[5049]: I0127 17:11:08.177896 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-78789f6f9d-h896p" Jan 27 17:11:08 crc kubenswrapper[5049]: I0127 17:11:08.231854 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qnqlr"] Jan 27 17:11:16 crc kubenswrapper[5049]: I0127 17:11:16.921237 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wphmd" Jan 27 17:11:17 crc kubenswrapper[5049]: I0127 17:11:17.781142 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:11:17 crc kubenswrapper[5049]: I0127 17:11:17.781221 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:11:17 crc kubenswrapper[5049]: I0127 17:11:17.781283 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 17:11:17 crc kubenswrapper[5049]: I0127 17:11:17.781914 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1aa9223cd763227032c3196c83813fd302f48bd7085cca520f2fac4b65a3aa4"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:11:17 crc kubenswrapper[5049]: I0127 17:11:17.782006 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://d1aa9223cd763227032c3196c83813fd302f48bd7085cca520f2fac4b65a3aa4" gracePeriod=600 Jan 27 17:11:18 crc kubenswrapper[5049]: I0127 17:11:18.236499 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="d1aa9223cd763227032c3196c83813fd302f48bd7085cca520f2fac4b65a3aa4" exitCode=0 Jan 27 17:11:18 crc kubenswrapper[5049]: I0127 17:11:18.236576 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"d1aa9223cd763227032c3196c83813fd302f48bd7085cca520f2fac4b65a3aa4"} Jan 27 17:11:18 crc kubenswrapper[5049]: I0127 17:11:18.248688 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" 
event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"6ad01eb278d8a66889a11fa84f093b411a8a38e169a31c62b60f821c2f9f05b1"} Jan 27 17:11:18 crc kubenswrapper[5049]: I0127 17:11:18.248710 5049 scope.go:117] "RemoveContainer" containerID="419d0791d576cddc4dba7f8228b001c562014997ef6e0484d641d20bb31d00ea" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.628713 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t"] Jan 27 17:11:30 crc kubenswrapper[5049]: E0127 17:11:30.630641 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86011ac7-6366-423a-9b2b-2d217622df13" containerName="extract-utilities" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.630758 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="86011ac7-6366-423a-9b2b-2d217622df13" containerName="extract-utilities" Jan 27 17:11:30 crc kubenswrapper[5049]: E0127 17:11:30.630836 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86011ac7-6366-423a-9b2b-2d217622df13" containerName="extract-content" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.630904 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="86011ac7-6366-423a-9b2b-2d217622df13" containerName="extract-content" Jan 27 17:11:30 crc kubenswrapper[5049]: E0127 17:11:30.630982 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86011ac7-6366-423a-9b2b-2d217622df13" containerName="registry-server" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.631064 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="86011ac7-6366-423a-9b2b-2d217622df13" containerName="registry-server" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.631286 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="86011ac7-6366-423a-9b2b-2d217622df13" containerName="registry-server" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.632319 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.635347 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.636481 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t"] Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.803306 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjz8q\" (UniqueName: \"kubernetes.io/projected/145acd4d-d458-4ea3-9abb-f5a58976ecf1-kube-api-access-mjz8q\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t\" (UID: \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.803373 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/145acd4d-d458-4ea3-9abb-f5a58976ecf1-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t\" (UID: \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.803472 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/145acd4d-d458-4ea3-9abb-f5a58976ecf1-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t\" (UID: \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.905159 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/145acd4d-d458-4ea3-9abb-f5a58976ecf1-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t\" (UID: \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.905732 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/145acd4d-d458-4ea3-9abb-f5a58976ecf1-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t\" (UID: \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.905794 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjz8q\" (UniqueName: \"kubernetes.io/projected/145acd4d-d458-4ea3-9abb-f5a58976ecf1-kube-api-access-mjz8q\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t\" (UID: \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.905944 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/145acd4d-d458-4ea3-9abb-f5a58976ecf1-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t\" (UID: \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.906422 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/145acd4d-d458-4ea3-9abb-f5a58976ecf1-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t\" (UID: \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.930787 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjz8q\" (UniqueName: \"kubernetes.io/projected/145acd4d-d458-4ea3-9abb-f5a58976ecf1-kube-api-access-mjz8q\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t\" (UID: \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" Jan 27 17:11:30 crc kubenswrapper[5049]: I0127 17:11:30.951358 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" Jan 27 17:11:31 crc kubenswrapper[5049]: I0127 17:11:31.192430 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t"] Jan 27 17:11:31 crc kubenswrapper[5049]: I0127 17:11:31.358221 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" event={"ID":"145acd4d-d458-4ea3-9abb-f5a58976ecf1","Type":"ContainerStarted","Data":"b508c930269002d8c4d753f90e09e5f90a9f595ef17fb05a89c89d9a7d0878b2"} Jan 27 17:11:31 crc kubenswrapper[5049]: I0127 17:11:31.358271 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" event={"ID":"145acd4d-d458-4ea3-9abb-f5a58976ecf1","Type":"ContainerStarted","Data":"63316fe667e4d6be5b766a2636be89218d825edec10d38e5751017c135e11d10"} Jan 27 17:11:32 crc kubenswrapper[5049]: I0127 17:11:32.367452 5049 generic.go:334] "Generic (PLEG): container finished" podID="145acd4d-d458-4ea3-9abb-f5a58976ecf1" containerID="b508c930269002d8c4d753f90e09e5f90a9f595ef17fb05a89c89d9a7d0878b2" exitCode=0 Jan 27 17:11:32 crc kubenswrapper[5049]: I0127 17:11:32.368341 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" event={"ID":"145acd4d-d458-4ea3-9abb-f5a58976ecf1","Type":"ContainerDied","Data":"b508c930269002d8c4d753f90e09e5f90a9f595ef17fb05a89c89d9a7d0878b2"} Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.273917 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-qnqlr" podUID="ed96c1d9-55f9-48df-970b-2b1e71a90633" containerName="console" containerID="cri-o://01e0060b09da86f1c4b6942660c0d6e52297e911f35606a30d47e1198b89dda2" gracePeriod=15 Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.682590 5049 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-qnqlr_ed96c1d9-55f9-48df-970b-2b1e71a90633/console/0.log" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.682967 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.844437 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-service-ca\") pod \"ed96c1d9-55f9-48df-970b-2b1e71a90633\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.844496 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-trusted-ca-bundle\") pod \"ed96c1d9-55f9-48df-970b-2b1e71a90633\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.844523 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-serving-cert\") pod \"ed96c1d9-55f9-48df-970b-2b1e71a90633\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.844574 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86zl8\" (UniqueName: \"kubernetes.io/projected/ed96c1d9-55f9-48df-970b-2b1e71a90633-kube-api-access-86zl8\") pod \"ed96c1d9-55f9-48df-970b-2b1e71a90633\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.844639 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-config\") pod \"ed96c1d9-55f9-48df-970b-2b1e71a90633\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.844668 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-oauth-config\") pod \"ed96c1d9-55f9-48df-970b-2b1e71a90633\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.844789 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-oauth-serving-cert\") pod \"ed96c1d9-55f9-48df-970b-2b1e71a90633\" (UID: \"ed96c1d9-55f9-48df-970b-2b1e71a90633\") " Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.845738 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-service-ca" (OuterVolumeSpecName: "service-ca") pod "ed96c1d9-55f9-48df-970b-2b1e71a90633" (UID: "ed96c1d9-55f9-48df-970b-2b1e71a90633"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.845834 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-config" (OuterVolumeSpecName: "console-config") pod "ed96c1d9-55f9-48df-970b-2b1e71a90633" (UID: "ed96c1d9-55f9-48df-970b-2b1e71a90633"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.845946 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ed96c1d9-55f9-48df-970b-2b1e71a90633" (UID: "ed96c1d9-55f9-48df-970b-2b1e71a90633"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.845967 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ed96c1d9-55f9-48df-970b-2b1e71a90633" (UID: "ed96c1d9-55f9-48df-970b-2b1e71a90633"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.863939 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed96c1d9-55f9-48df-970b-2b1e71a90633-kube-api-access-86zl8" (OuterVolumeSpecName: "kube-api-access-86zl8") pod "ed96c1d9-55f9-48df-970b-2b1e71a90633" (UID: "ed96c1d9-55f9-48df-970b-2b1e71a90633"). InnerVolumeSpecName "kube-api-access-86zl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.864028 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ed96c1d9-55f9-48df-970b-2b1e71a90633" (UID: "ed96c1d9-55f9-48df-970b-2b1e71a90633"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.864976 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ed96c1d9-55f9-48df-970b-2b1e71a90633" (UID: "ed96c1d9-55f9-48df-970b-2b1e71a90633"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.946603 5049 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.946668 5049 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.946716 5049 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.946734 5049 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.946756 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86zl8\" (UniqueName: \"kubernetes.io/projected/ed96c1d9-55f9-48df-970b-2b1e71a90633-kube-api-access-86zl8\") on node \"crc\" DevicePath \"\"" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.946776 5049 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:11:33 crc kubenswrapper[5049]: I0127 17:11:33.946793 5049 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed96c1d9-55f9-48df-970b-2b1e71a90633-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:11:34 crc kubenswrapper[5049]: I0127 17:11:34.388365 5049 generic.go:334] "Generic (PLEG): container finished" podID="145acd4d-d458-4ea3-9abb-f5a58976ecf1" containerID="f606c1951b7ab0050fb497e4720ec5cc7a1134cf597771e2765f5252892080fe" exitCode=0 Jan 27 17:11:34 crc kubenswrapper[5049]: I0127 17:11:34.388440 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" event={"ID":"145acd4d-d458-4ea3-9abb-f5a58976ecf1","Type":"ContainerDied","Data":"f606c1951b7ab0050fb497e4720ec5cc7a1134cf597771e2765f5252892080fe"} Jan 27 17:11:34 crc kubenswrapper[5049]: I0127 17:11:34.392558 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qnqlr_ed96c1d9-55f9-48df-970b-2b1e71a90633/console/0.log" Jan 27 17:11:34 crc kubenswrapper[5049]: I0127 17:11:34.392649 5049 generic.go:334] "Generic (PLEG): container finished" podID="ed96c1d9-55f9-48df-970b-2b1e71a90633" containerID="01e0060b09da86f1c4b6942660c0d6e52297e911f35606a30d47e1198b89dda2" exitCode=2 Jan 27 17:11:34 crc kubenswrapper[5049]: I0127 17:11:34.392734 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qnqlr" event={"ID":"ed96c1d9-55f9-48df-970b-2b1e71a90633","Type":"ContainerDied","Data":"01e0060b09da86f1c4b6942660c0d6e52297e911f35606a30d47e1198b89dda2"} Jan 27 17:11:34 crc kubenswrapper[5049]: I0127 17:11:34.392780 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qnqlr" 
event={"ID":"ed96c1d9-55f9-48df-970b-2b1e71a90633","Type":"ContainerDied","Data":"73784ccfff0950e8f15bad3bcf6f947969eb466dacab45b815dc342fa7a88e4d"} Jan 27 17:11:34 crc kubenswrapper[5049]: I0127 17:11:34.392815 5049 scope.go:117] "RemoveContainer" containerID="01e0060b09da86f1c4b6942660c0d6e52297e911f35606a30d47e1198b89dda2" Jan 27 17:11:34 crc kubenswrapper[5049]: I0127 17:11:34.393017 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qnqlr" Jan 27 17:11:34 crc kubenswrapper[5049]: I0127 17:11:34.423194 5049 scope.go:117] "RemoveContainer" containerID="01e0060b09da86f1c4b6942660c0d6e52297e911f35606a30d47e1198b89dda2" Jan 27 17:11:34 crc kubenswrapper[5049]: E0127 17:11:34.423614 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01e0060b09da86f1c4b6942660c0d6e52297e911f35606a30d47e1198b89dda2\": container with ID starting with 01e0060b09da86f1c4b6942660c0d6e52297e911f35606a30d47e1198b89dda2 not found: ID does not exist" containerID="01e0060b09da86f1c4b6942660c0d6e52297e911f35606a30d47e1198b89dda2" Jan 27 17:11:34 crc kubenswrapper[5049]: I0127 17:11:34.423675 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e0060b09da86f1c4b6942660c0d6e52297e911f35606a30d47e1198b89dda2"} err="failed to get container status \"01e0060b09da86f1c4b6942660c0d6e52297e911f35606a30d47e1198b89dda2\": rpc error: code = NotFound desc = could not find container \"01e0060b09da86f1c4b6942660c0d6e52297e911f35606a30d47e1198b89dda2\": container with ID starting with 01e0060b09da86f1c4b6942660c0d6e52297e911f35606a30d47e1198b89dda2 not found: ID does not exist" Jan 27 17:11:34 crc kubenswrapper[5049]: I0127 17:11:34.500210 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qnqlr"] Jan 27 17:11:34 crc kubenswrapper[5049]: I0127 17:11:34.505927 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-qnqlr"] Jan 27 17:11:35 crc kubenswrapper[5049]: I0127 17:11:35.402205 5049 generic.go:334] "Generic (PLEG): container finished" podID="145acd4d-d458-4ea3-9abb-f5a58976ecf1" containerID="8bafd8fb71082484992d8a57353684b3104f6fb3a143720077923bef515c8cd5" exitCode=0 Jan 27 17:11:35 crc kubenswrapper[5049]: I0127 17:11:35.402509 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" event={"ID":"145acd4d-d458-4ea3-9abb-f5a58976ecf1","Type":"ContainerDied","Data":"8bafd8fb71082484992d8a57353684b3104f6fb3a143720077923bef515c8cd5"} Jan 27 17:11:35 crc kubenswrapper[5049]: I0127 17:11:35.671041 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed96c1d9-55f9-48df-970b-2b1e71a90633" path="/var/lib/kubelet/pods/ed96c1d9-55f9-48df-970b-2b1e71a90633/volumes" Jan 27 17:11:36 crc kubenswrapper[5049]: I0127 17:11:36.694591 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" Jan 27 17:11:36 crc kubenswrapper[5049]: I0127 17:11:36.786405 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjz8q\" (UniqueName: \"kubernetes.io/projected/145acd4d-d458-4ea3-9abb-f5a58976ecf1-kube-api-access-mjz8q\") pod \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\" (UID: \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\") " Jan 27 17:11:36 crc kubenswrapper[5049]: I0127 17:11:36.786547 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/145acd4d-d458-4ea3-9abb-f5a58976ecf1-bundle\") pod \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\" (UID: \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\") " Jan 27 17:11:36 crc kubenswrapper[5049]: I0127 17:11:36.786645 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/145acd4d-d458-4ea3-9abb-f5a58976ecf1-util\") pod \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\" (UID: \"145acd4d-d458-4ea3-9abb-f5a58976ecf1\") " Jan 27 17:11:36 crc kubenswrapper[5049]: I0127 17:11:36.787531 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/145acd4d-d458-4ea3-9abb-f5a58976ecf1-bundle" (OuterVolumeSpecName: "bundle") pod "145acd4d-d458-4ea3-9abb-f5a58976ecf1" (UID: "145acd4d-d458-4ea3-9abb-f5a58976ecf1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:11:36 crc kubenswrapper[5049]: I0127 17:11:36.792047 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/145acd4d-d458-4ea3-9abb-f5a58976ecf1-kube-api-access-mjz8q" (OuterVolumeSpecName: "kube-api-access-mjz8q") pod "145acd4d-d458-4ea3-9abb-f5a58976ecf1" (UID: "145acd4d-d458-4ea3-9abb-f5a58976ecf1"). InnerVolumeSpecName "kube-api-access-mjz8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:11:36 crc kubenswrapper[5049]: I0127 17:11:36.887730 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjz8q\" (UniqueName: \"kubernetes.io/projected/145acd4d-d458-4ea3-9abb-f5a58976ecf1-kube-api-access-mjz8q\") on node \"crc\" DevicePath \"\"" Jan 27 17:11:36 crc kubenswrapper[5049]: I0127 17:11:36.887829 5049 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/145acd4d-d458-4ea3-9abb-f5a58976ecf1-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:11:37 crc kubenswrapper[5049]: I0127 17:11:37.019179 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/145acd4d-d458-4ea3-9abb-f5a58976ecf1-util" (OuterVolumeSpecName: "util") pod "145acd4d-d458-4ea3-9abb-f5a58976ecf1" (UID: "145acd4d-d458-4ea3-9abb-f5a58976ecf1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:11:37 crc kubenswrapper[5049]: I0127 17:11:37.089652 5049 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/145acd4d-d458-4ea3-9abb-f5a58976ecf1-util\") on node \"crc\" DevicePath \"\"" Jan 27 17:11:37 crc kubenswrapper[5049]: I0127 17:11:37.418746 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" event={"ID":"145acd4d-d458-4ea3-9abb-f5a58976ecf1","Type":"ContainerDied","Data":"63316fe667e4d6be5b766a2636be89218d825edec10d38e5751017c135e11d10"} Jan 27 17:11:37 crc kubenswrapper[5049]: I0127 17:11:37.419250 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63316fe667e4d6be5b766a2636be89218d825edec10d38e5751017c135e11d10" Jan 27 17:11:37 crc kubenswrapper[5049]: I0127 17:11:37.418818 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.748292 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr"] Jan 27 17:11:45 crc kubenswrapper[5049]: E0127 17:11:45.748897 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="145acd4d-d458-4ea3-9abb-f5a58976ecf1" containerName="pull" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.748909 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="145acd4d-d458-4ea3-9abb-f5a58976ecf1" containerName="pull" Jan 27 17:11:45 crc kubenswrapper[5049]: E0127 17:11:45.748919 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="145acd4d-d458-4ea3-9abb-f5a58976ecf1" containerName="extract" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.748925 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="145acd4d-d458-4ea3-9abb-f5a58976ecf1" containerName="extract" Jan 27 17:11:45 crc kubenswrapper[5049]: E0127 17:11:45.748933 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed96c1d9-55f9-48df-970b-2b1e71a90633" containerName="console" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.748939 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed96c1d9-55f9-48df-970b-2b1e71a90633" containerName="console" Jan 27 17:11:45 crc kubenswrapper[5049]: E0127 17:11:45.748946 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="145acd4d-d458-4ea3-9abb-f5a58976ecf1" containerName="util" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.748952 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="145acd4d-d458-4ea3-9abb-f5a58976ecf1" containerName="util" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.749055 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed96c1d9-55f9-48df-970b-2b1e71a90633" containerName="console" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.749066 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="145acd4d-d458-4ea3-9abb-f5a58976ecf1" containerName="extract" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.749457 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.752983 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.753199 5049 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4gpsm" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.756712 5049 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.756759 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.756759 5049 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.773052 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr"] Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.836648 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j85t\" (UniqueName: \"kubernetes.io/projected/382b1716-fd63-460f-a84f-0c37f695d08f-kube-api-access-6j85t\") pod \"metallb-operator-controller-manager-6779f5b7c7-sntlr\" (UID: \"382b1716-fd63-460f-a84f-0c37f695d08f\") " pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.836798 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/382b1716-fd63-460f-a84f-0c37f695d08f-webhook-cert\") pod \"metallb-operator-controller-manager-6779f5b7c7-sntlr\" (UID: \"382b1716-fd63-460f-a84f-0c37f695d08f\") " pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.836887 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/382b1716-fd63-460f-a84f-0c37f695d08f-apiservice-cert\") pod \"metallb-operator-controller-manager-6779f5b7c7-sntlr\" (UID: \"382b1716-fd63-460f-a84f-0c37f695d08f\") " pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.937782 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j85t\" (UniqueName: \"kubernetes.io/projected/382b1716-fd63-460f-a84f-0c37f695d08f-kube-api-access-6j85t\") pod \"metallb-operator-controller-manager-6779f5b7c7-sntlr\" (UID: \"382b1716-fd63-460f-a84f-0c37f695d08f\") " pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.937834 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/382b1716-fd63-460f-a84f-0c37f695d08f-webhook-cert\") pod \"metallb-operator-controller-manager-6779f5b7c7-sntlr\" (UID: \"382b1716-fd63-460f-a84f-0c37f695d08f\") " pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.937866 5049 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/382b1716-fd63-460f-a84f-0c37f695d08f-apiservice-cert\") pod \"metallb-operator-controller-manager-6779f5b7c7-sntlr\" (UID: \"382b1716-fd63-460f-a84f-0c37f695d08f\") " pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.944398 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/382b1716-fd63-460f-a84f-0c37f695d08f-apiservice-cert\") pod \"metallb-operator-controller-manager-6779f5b7c7-sntlr\" (UID: \"382b1716-fd63-460f-a84f-0c37f695d08f\") " pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.944501 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/382b1716-fd63-460f-a84f-0c37f695d08f-webhook-cert\") pod \"metallb-operator-controller-manager-6779f5b7c7-sntlr\" (UID: \"382b1716-fd63-460f-a84f-0c37f695d08f\") " pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.958439 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j85t\" (UniqueName: \"kubernetes.io/projected/382b1716-fd63-460f-a84f-0c37f695d08f-kube-api-access-6j85t\") pod \"metallb-operator-controller-manager-6779f5b7c7-sntlr\" (UID: \"382b1716-fd63-460f-a84f-0c37f695d08f\") " pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.975760 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp"] Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.976446 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.979271 5049 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.979438 5049 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-q2qhl" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.979634 5049 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 17:11:45 crc kubenswrapper[5049]: I0127 17:11:45.992191 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp"] Jan 27 17:11:46 crc kubenswrapper[5049]: I0127 17:11:46.067630 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" Jan 27 17:11:46 crc kubenswrapper[5049]: I0127 17:11:46.140421 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/36d6d6cd-b037-4b8b-af74-52af62b08fb6-apiservice-cert\") pod \"metallb-operator-webhook-server-7899c8c964-mnspp\" (UID: \"36d6d6cd-b037-4b8b-af74-52af62b08fb6\") " pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" Jan 27 17:11:46 crc kubenswrapper[5049]: I0127 17:11:46.140808 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrcbd\" (UniqueName: \"kubernetes.io/projected/36d6d6cd-b037-4b8b-af74-52af62b08fb6-kube-api-access-vrcbd\") pod \"metallb-operator-webhook-server-7899c8c964-mnspp\" (UID: \"36d6d6cd-b037-4b8b-af74-52af62b08fb6\") " pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" Jan 27 17:11:46 crc kubenswrapper[5049]: I0127 17:11:46.140848 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36d6d6cd-b037-4b8b-af74-52af62b08fb6-webhook-cert\") pod \"metallb-operator-webhook-server-7899c8c964-mnspp\" (UID: \"36d6d6cd-b037-4b8b-af74-52af62b08fb6\") " pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" Jan 27 17:11:46 crc kubenswrapper[5049]: I0127 17:11:46.242847 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/36d6d6cd-b037-4b8b-af74-52af62b08fb6-apiservice-cert\") pod \"metallb-operator-webhook-server-7899c8c964-mnspp\" (UID: \"36d6d6cd-b037-4b8b-af74-52af62b08fb6\") " pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" Jan 27 17:11:46 crc kubenswrapper[5049]: I0127 17:11:46.242899 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrcbd\" (UniqueName: \"kubernetes.io/projected/36d6d6cd-b037-4b8b-af74-52af62b08fb6-kube-api-access-vrcbd\") pod \"metallb-operator-webhook-server-7899c8c964-mnspp\" (UID: \"36d6d6cd-b037-4b8b-af74-52af62b08fb6\") " pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" Jan 27 17:11:46 crc kubenswrapper[5049]: I0127 17:11:46.242928 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36d6d6cd-b037-4b8b-af74-52af62b08fb6-webhook-cert\") pod \"metallb-operator-webhook-server-7899c8c964-mnspp\" (UID: \"36d6d6cd-b037-4b8b-af74-52af62b08fb6\") " pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" Jan 27 17:11:46 crc kubenswrapper[5049]: I0127 17:11:46.249937 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/36d6d6cd-b037-4b8b-af74-52af62b08fb6-apiservice-cert\") pod \"metallb-operator-webhook-server-7899c8c964-mnspp\" (UID: \"36d6d6cd-b037-4b8b-af74-52af62b08fb6\") " pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" Jan 27 17:11:46 crc kubenswrapper[5049]: I0127 17:11:46.253964 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36d6d6cd-b037-4b8b-af74-52af62b08fb6-webhook-cert\") pod \"metallb-operator-webhook-server-7899c8c964-mnspp\" (UID: \"36d6d6cd-b037-4b8b-af74-52af62b08fb6\") " 
pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" Jan 27 17:11:46 crc kubenswrapper[5049]: I0127 17:11:46.261812 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrcbd\" (UniqueName: \"kubernetes.io/projected/36d6d6cd-b037-4b8b-af74-52af62b08fb6-kube-api-access-vrcbd\") pod \"metallb-operator-webhook-server-7899c8c964-mnspp\" (UID: \"36d6d6cd-b037-4b8b-af74-52af62b08fb6\") " pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" Jan 27 17:11:46 crc kubenswrapper[5049]: I0127 17:11:46.324359 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" Jan 27 17:11:46 crc kubenswrapper[5049]: I0127 17:11:46.324968 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr"] Jan 27 17:11:46 crc kubenswrapper[5049]: I0127 17:11:46.468324 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" event={"ID":"382b1716-fd63-460f-a84f-0c37f695d08f","Type":"ContainerStarted","Data":"47120b92434824ba66456a60fb4233f937b281c4ed214ac9c48c7ae77b1fe025"} Jan 27 17:11:46 crc kubenswrapper[5049]: I0127 17:11:46.764164 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp"] Jan 27 17:11:46 crc kubenswrapper[5049]: W0127 17:11:46.768731 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36d6d6cd_b037_4b8b_af74_52af62b08fb6.slice/crio-cc93730f8935911dda4d5014443b46a06e80ba458056177e1078f98697e0bb3b WatchSource:0}: Error finding container cc93730f8935911dda4d5014443b46a06e80ba458056177e1078f98697e0bb3b: Status 404 returned error can't find the container with id cc93730f8935911dda4d5014443b46a06e80ba458056177e1078f98697e0bb3b Jan 27 17:11:47 crc kubenswrapper[5049]: I0127 17:11:47.474367 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" event={"ID":"36d6d6cd-b037-4b8b-af74-52af62b08fb6","Type":"ContainerStarted","Data":"cc93730f8935911dda4d5014443b46a06e80ba458056177e1078f98697e0bb3b"} Jan 27 17:11:49 crc kubenswrapper[5049]: I0127 17:11:49.485301 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" event={"ID":"382b1716-fd63-460f-a84f-0c37f695d08f","Type":"ContainerStarted","Data":"1bbb15c893a6e69e68e7fcfee5375028b96f7b9b2dbdeac7eb48b352c7d3a447"} Jan 27 17:11:49 crc kubenswrapper[5049]: I0127 17:11:49.485624 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" Jan 27 17:11:49 crc kubenswrapper[5049]: I0127 17:11:49.511659 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" podStartSLOduration=1.71441818 podStartE2EDuration="4.511637279s" podCreationTimestamp="2026-01-27 17:11:45 +0000 UTC" firstStartedPulling="2026-01-27 17:11:46.343809464 +0000 UTC m=+881.442783013" lastFinishedPulling="2026-01-27 17:11:49.141028563 +0000 UTC m=+884.240002112" observedRunningTime="2026-01-27 17:11:49.508381175 +0000 UTC m=+884.607354734" watchObservedRunningTime="2026-01-27 17:11:49.511637279 +0000 UTC m=+884.610610828" Jan 27 17:11:51 crc 
kubenswrapper[5049]: I0127 17:11:51.511541 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" event={"ID":"36d6d6cd-b037-4b8b-af74-52af62b08fb6","Type":"ContainerStarted","Data":"8a6fceb080e68ed3a688771ad7a7ff33f6442d6164d7d92f4f72b6016d5d2b5b"} Jan 27 17:11:51 crc kubenswrapper[5049]: I0127 17:11:51.511906 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" Jan 27 17:11:51 crc kubenswrapper[5049]: I0127 17:11:51.557506 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" podStartSLOduration=2.6612023049999998 podStartE2EDuration="6.557479762s" podCreationTimestamp="2026-01-27 17:11:45 +0000 UTC" firstStartedPulling="2026-01-27 17:11:46.771276922 +0000 UTC m=+881.870250471" lastFinishedPulling="2026-01-27 17:11:50.667554379 +0000 UTC m=+885.766527928" observedRunningTime="2026-01-27 17:11:51.556516004 +0000 UTC m=+886.655489553" watchObservedRunningTime="2026-01-27 17:11:51.557479762 +0000 UTC m=+886.656453321" Jan 27 17:12:06 crc kubenswrapper[5049]: I0127 17:12:06.331286 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7899c8c964-mnspp" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.072168 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6779f5b7c7-sntlr" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.791888 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-cspr9"] Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.805267 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.812029 5049 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-b2mcl" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.812311 5049 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.812559 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.819142 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dwck\" (UniqueName: \"kubernetes.io/projected/d6eb1833-4714-4674-b8c8-c8d367b09d77-kube-api-access-5dwck\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.819233 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d6eb1833-4714-4674-b8c8-c8d367b09d77-frr-conf\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.819264 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d6eb1833-4714-4674-b8c8-c8d367b09d77-frr-startup\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.819280 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6eb1833-4714-4674-b8c8-c8d367b09d77-metrics-certs\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.819298 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d6eb1833-4714-4674-b8c8-c8d367b09d77-reloader\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.819340 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d6eb1833-4714-4674-b8c8-c8d367b09d77-frr-sockets\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.819359 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d6eb1833-4714-4674-b8c8-c8d367b09d77-metrics\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.819878 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc"] Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.831704 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc"] Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.831820 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.833330 5049 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.890633 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-plq87"] Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.891576 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-plq87" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.895009 5049 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.895159 5049 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-w2x87" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.895169 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.895378 5049 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.902925 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-8ngsz"] Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.904428 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8ngsz" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.906418 5049 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.920270 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d6eb1833-4714-4674-b8c8-c8d367b09d77-frr-conf\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.920321 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d6eb1833-4714-4674-b8c8-c8d367b09d77-frr-startup\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.920338 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6eb1833-4714-4674-b8c8-c8d367b09d77-metrics-certs\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.920359 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d6eb1833-4714-4674-b8c8-c8d367b09d77-reloader\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.920400 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" 
(UniqueName: \"kubernetes.io/empty-dir/d6eb1833-4714-4674-b8c8-c8d367b09d77-frr-sockets\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.920418 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d6eb1833-4714-4674-b8c8-c8d367b09d77-metrics\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.920440 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dwck\" (UniqueName: \"kubernetes.io/projected/d6eb1833-4714-4674-b8c8-c8d367b09d77-kube-api-access-5dwck\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.921528 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d6eb1833-4714-4674-b8c8-c8d367b09d77-frr-conf\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.921869 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d6eb1833-4714-4674-b8c8-c8d367b09d77-reloader\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.922087 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d6eb1833-4714-4674-b8c8-c8d367b09d77-frr-sockets\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.922237 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d6eb1833-4714-4674-b8c8-c8d367b09d77-metrics\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.922298 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d6eb1833-4714-4674-b8c8-c8d367b09d77-frr-startup\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.922824 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8ngsz"] Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.927019 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6eb1833-4714-4674-b8c8-c8d367b09d77-metrics-certs\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:26 crc kubenswrapper[5049]: I0127 17:12:26.955299 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dwck\" (UniqueName: \"kubernetes.io/projected/d6eb1833-4714-4674-b8c8-c8d367b09d77-kube-api-access-5dwck\") pod \"frr-k8s-cspr9\" (UID: \"d6eb1833-4714-4674-b8c8-c8d367b09d77\") " pod="metallb-system/frr-k8s-cspr9" Jan 27 
17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.022731 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a03921d3-73b0-4197-bce2-1c931a7d784e-cert\") pod \"controller-6968d8fdc4-8ngsz\" (UID: \"a03921d3-73b0-4197-bce2-1c931a7d784e\") " pod="metallb-system/controller-6968d8fdc4-8ngsz" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.022792 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvg5v\" (UniqueName: \"kubernetes.io/projected/a03921d3-73b0-4197-bce2-1c931a7d784e-kube-api-access-xvg5v\") pod \"controller-6968d8fdc4-8ngsz\" (UID: \"a03921d3-73b0-4197-bce2-1c931a7d784e\") " pod="metallb-system/controller-6968d8fdc4-8ngsz" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.022825 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whjq6\" (UniqueName: \"kubernetes.io/projected/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-kube-api-access-whjq6\") pod \"speaker-plq87\" (UID: \"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2\") " pod="metallb-system/speaker-plq87" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.022847 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a03921d3-73b0-4197-bce2-1c931a7d784e-metrics-certs\") pod \"controller-6968d8fdc4-8ngsz\" (UID: \"a03921d3-73b0-4197-bce2-1c931a7d784e\") " pod="metallb-system/controller-6968d8fdc4-8ngsz" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.022866 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-metallb-excludel2\") pod \"speaker-plq87\" (UID: \"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2\") " pod="metallb-system/speaker-plq87" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.022966 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e119304-dec2-43c2-8534-21be80f79b69-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bptwc\" (UID: \"3e119304-dec2-43c2-8534-21be80f79b69\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.022984 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-memberlist\") pod \"speaker-plq87\" (UID: \"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2\") " pod="metallb-system/speaker-plq87" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.023001 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj5l6\" (UniqueName: \"kubernetes.io/projected/3e119304-dec2-43c2-8534-21be80f79b69-kube-api-access-sj5l6\") pod \"frr-k8s-webhook-server-7df86c4f6c-bptwc\" (UID: \"3e119304-dec2-43c2-8534-21be80f79b69\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.023021 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-metrics-certs\") pod \"speaker-plq87\" (UID: 
\"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2\") " pod="metallb-system/speaker-plq87" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.123698 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e119304-dec2-43c2-8534-21be80f79b69-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bptwc\" (UID: \"3e119304-dec2-43c2-8534-21be80f79b69\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.123975 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-memberlist\") pod \"speaker-plq87\" (UID: \"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2\") " pod="metallb-system/speaker-plq87" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.124007 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj5l6\" (UniqueName: \"kubernetes.io/projected/3e119304-dec2-43c2-8534-21be80f79b69-kube-api-access-sj5l6\") pod \"frr-k8s-webhook-server-7df86c4f6c-bptwc\" (UID: \"3e119304-dec2-43c2-8534-21be80f79b69\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.124035 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-metrics-certs\") pod \"speaker-plq87\" (UID: \"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2\") " pod="metallb-system/speaker-plq87" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.124064 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a03921d3-73b0-4197-bce2-1c931a7d784e-cert\") pod \"controller-6968d8fdc4-8ngsz\" (UID: \"a03921d3-73b0-4197-bce2-1c931a7d784e\") " pod="metallb-system/controller-6968d8fdc4-8ngsz" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.124093 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvg5v\" (UniqueName: \"kubernetes.io/projected/a03921d3-73b0-4197-bce2-1c931a7d784e-kube-api-access-xvg5v\") pod \"controller-6968d8fdc4-8ngsz\" (UID: \"a03921d3-73b0-4197-bce2-1c931a7d784e\") " pod="metallb-system/controller-6968d8fdc4-8ngsz" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.124111 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whjq6\" (UniqueName: \"kubernetes.io/projected/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-kube-api-access-whjq6\") pod \"speaker-plq87\" (UID: \"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2\") " pod="metallb-system/speaker-plq87" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.124134 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a03921d3-73b0-4197-bce2-1c931a7d784e-metrics-certs\") pod \"controller-6968d8fdc4-8ngsz\" (UID: \"a03921d3-73b0-4197-bce2-1c931a7d784e\") " pod="metallb-system/controller-6968d8fdc4-8ngsz" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.124152 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-metallb-excludel2\") pod \"speaker-plq87\" (UID: \"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2\") " pod="metallb-system/speaker-plq87" Jan 27 17:12:27 crc 
kubenswrapper[5049]: I0127 17:12:27.124848 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-metallb-excludel2\") pod \"speaker-plq87\" (UID: \"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2\") " pod="metallb-system/speaker-plq87" Jan 27 17:12:27 crc kubenswrapper[5049]: E0127 17:12:27.124927 5049 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 17:12:27 crc kubenswrapper[5049]: E0127 17:12:27.124966 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-memberlist podName:5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2 nodeName:}" failed. No retries permitted until 2026-01-27 17:12:27.624952418 +0000 UTC m=+922.723925967 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-memberlist") pod "speaker-plq87" (UID: "5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2") : secret "metallb-memberlist" not found Jan 27 17:12:27 crc kubenswrapper[5049]: E0127 17:12:27.125639 5049 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 27 17:12:27 crc kubenswrapper[5049]: E0127 17:12:27.125693 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a03921d3-73b0-4197-bce2-1c931a7d784e-metrics-certs podName:a03921d3-73b0-4197-bce2-1c931a7d784e nodeName:}" failed. No retries permitted until 2026-01-27 17:12:27.625663739 +0000 UTC m=+922.724637288 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a03921d3-73b0-4197-bce2-1c931a7d784e-metrics-certs") pod "controller-6968d8fdc4-8ngsz" (UID: "a03921d3-73b0-4197-bce2-1c931a7d784e") : secret "controller-certs-secret" not found Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.127519 5049 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.132532 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-cspr9" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.143890 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a03921d3-73b0-4197-bce2-1c931a7d784e-cert\") pod \"controller-6968d8fdc4-8ngsz\" (UID: \"a03921d3-73b0-4197-bce2-1c931a7d784e\") " pod="metallb-system/controller-6968d8fdc4-8ngsz" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.148891 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-metrics-certs\") pod \"speaker-plq87\" (UID: \"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2\") " pod="metallb-system/speaker-plq87" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.149755 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3e119304-dec2-43c2-8534-21be80f79b69-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bptwc\" (UID: \"3e119304-dec2-43c2-8534-21be80f79b69\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.151659 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj5l6\" (UniqueName: \"kubernetes.io/projected/3e119304-dec2-43c2-8534-21be80f79b69-kube-api-access-sj5l6\") pod \"frr-k8s-webhook-server-7df86c4f6c-bptwc\" (UID: \"3e119304-dec2-43c2-8534-21be80f79b69\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.154308 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvg5v\" (UniqueName: \"kubernetes.io/projected/a03921d3-73b0-4197-bce2-1c931a7d784e-kube-api-access-xvg5v\") pod \"controller-6968d8fdc4-8ngsz\" (UID: \"a03921d3-73b0-4197-bce2-1c931a7d784e\") " pod="metallb-system/controller-6968d8fdc4-8ngsz" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.164122 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whjq6\" (UniqueName: \"kubernetes.io/projected/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-kube-api-access-whjq6\") pod \"speaker-plq87\" (UID: \"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2\") " pod="metallb-system/speaker-plq87" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.442404 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.630291 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a03921d3-73b0-4197-bce2-1c931a7d784e-metrics-certs\") pod \"controller-6968d8fdc4-8ngsz\" (UID: \"a03921d3-73b0-4197-bce2-1c931a7d784e\") " pod="metallb-system/controller-6968d8fdc4-8ngsz" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.630607 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-memberlist\") pod \"speaker-plq87\" (UID: \"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2\") " pod="metallb-system/speaker-plq87" Jan 27 17:12:27 crc kubenswrapper[5049]: E0127 17:12:27.630752 5049 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 17:12:27 crc kubenswrapper[5049]: E0127 17:12:27.630800 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-memberlist podName:5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2 nodeName:}" failed. No retries permitted until 2026-01-27 17:12:28.630784492 +0000 UTC m=+923.729758041 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-memberlist") pod "speaker-plq87" (UID: "5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2") : secret "metallb-memberlist" not found Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.636365 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a03921d3-73b0-4197-bce2-1c931a7d784e-metrics-certs\") pod \"controller-6968d8fdc4-8ngsz\" (UID: \"a03921d3-73b0-4197-bce2-1c931a7d784e\") " pod="metallb-system/controller-6968d8fdc4-8ngsz" Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.680937 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc"] Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.738267 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cspr9" event={"ID":"d6eb1833-4714-4674-b8c8-c8d367b09d77","Type":"ContainerStarted","Data":"dfeea868664effeec6ff7272479514697737854916435425af59fbfd90320c95"} Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.739810 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc" event={"ID":"3e119304-dec2-43c2-8534-21be80f79b69","Type":"ContainerStarted","Data":"372d0bf90410e9175e16addb5458591113f53c3e9ffac58d6777694d039a708f"} Jan 27 17:12:27 crc kubenswrapper[5049]: I0127 17:12:27.821955 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8ngsz" Jan 27 17:12:28 crc kubenswrapper[5049]: I0127 17:12:28.260200 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8ngsz"] Jan 27 17:12:28 crc kubenswrapper[5049]: W0127 17:12:28.264959 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda03921d3_73b0_4197_bce2_1c931a7d784e.slice/crio-35d9e3f3de28593762dc394c0841a3ca7c622ebe47999ffe015b4e0832a1097f WatchSource:0}: Error finding container 35d9e3f3de28593762dc394c0841a3ca7c622ebe47999ffe015b4e0832a1097f: Status 404 returned error can't find the container with id 35d9e3f3de28593762dc394c0841a3ca7c622ebe47999ffe015b4e0832a1097f Jan 27 17:12:28 crc kubenswrapper[5049]: I0127 17:12:28.647454 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-memberlist\") pod \"speaker-plq87\" (UID: \"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2\") " pod="metallb-system/speaker-plq87" Jan 27 17:12:28 crc kubenswrapper[5049]: I0127 17:12:28.653144 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2-memberlist\") pod \"speaker-plq87\" (UID: \"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2\") " pod="metallb-system/speaker-plq87" Jan 27 17:12:28 crc kubenswrapper[5049]: I0127 17:12:28.707511 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-plq87" Jan 27 17:12:28 crc kubenswrapper[5049]: I0127 17:12:28.749911 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8ngsz" event={"ID":"a03921d3-73b0-4197-bce2-1c931a7d784e","Type":"ContainerStarted","Data":"a58dd8eddd1e4c36f5e378ddef49b370c251fcffa351d38ced57e81ea1e6bd9a"} Jan 27 17:12:28 crc kubenswrapper[5049]: I0127 17:12:28.749980 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8ngsz" event={"ID":"a03921d3-73b0-4197-bce2-1c931a7d784e","Type":"ContainerStarted","Data":"ccfbfd8b3d60affbdb2a7c7f8deb9a63650bf4821da1e2036245019e65c4fc1d"} Jan 27 17:12:28 crc kubenswrapper[5049]: I0127 17:12:28.749995 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8ngsz" event={"ID":"a03921d3-73b0-4197-bce2-1c931a7d784e","Type":"ContainerStarted","Data":"35d9e3f3de28593762dc394c0841a3ca7c622ebe47999ffe015b4e0832a1097f"} Jan 27 17:12:28 crc kubenswrapper[5049]: I0127 17:12:28.750188 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8ngsz" Jan 27 17:12:28 crc kubenswrapper[5049]: I0127 17:12:28.765277 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-8ngsz" podStartSLOduration=2.765258009 podStartE2EDuration="2.765258009s" podCreationTimestamp="2026-01-27 17:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:12:28.764629011 +0000 UTC m=+923.863602560" watchObservedRunningTime="2026-01-27 17:12:28.765258009 +0000 UTC m=+923.864231558" Jan 27 17:12:29 crc kubenswrapper[5049]: I0127 17:12:29.760412 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-plq87" 
event={"ID":"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2","Type":"ContainerStarted","Data":"c2f4eedb22800a3b1b2cbfb6592e32ec7ee6ac64d6ce6257e0190670b5c048f4"}
Jan 27 17:12:29 crc kubenswrapper[5049]: I0127 17:12:29.760469 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-plq87" event={"ID":"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2","Type":"ContainerStarted","Data":"efc802a312aa1bdec3a422e3bba5f233b3f73c298138db9655856efeb46a9029"}
Jan 27 17:12:29 crc kubenswrapper[5049]: I0127 17:12:29.760482 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-plq87" event={"ID":"5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2","Type":"ContainerStarted","Data":"ec578754cf3f39d3984bb1cbc68d262af1b9cde1fe523125431d6156172d46d8"}
Jan 27 17:12:29 crc kubenswrapper[5049]: I0127 17:12:29.760814 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-plq87"
Jan 27 17:12:29 crc kubenswrapper[5049]: I0127 17:12:29.784099 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-plq87" podStartSLOduration=3.784080066 podStartE2EDuration="3.784080066s" podCreationTimestamp="2026-01-27 17:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:12:29.782226342 +0000 UTC m=+924.881199881" watchObservedRunningTime="2026-01-27 17:12:29.784080066 +0000 UTC m=+924.883053615"
Jan 27 17:12:34 crc kubenswrapper[5049]: I0127 17:12:34.789399 5049 generic.go:334] "Generic (PLEG): container finished" podID="d6eb1833-4714-4674-b8c8-c8d367b09d77" containerID="d2e085c0e9086ed5a24e7c893c45915ad36b703013c1f4e35277192f89a2999a" exitCode=0
Jan 27 17:12:34 crc kubenswrapper[5049]: I0127 17:12:34.789536 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cspr9" event={"ID":"d6eb1833-4714-4674-b8c8-c8d367b09d77","Type":"ContainerDied","Data":"d2e085c0e9086ed5a24e7c893c45915ad36b703013c1f4e35277192f89a2999a"}
Jan 27 17:12:34 crc kubenswrapper[5049]: I0127 17:12:34.792509 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc" event={"ID":"3e119304-dec2-43c2-8534-21be80f79b69","Type":"ContainerStarted","Data":"722d798652ad6f760cdb84bc8cd63db06224e1906e2ca0f648677385b066c03b"}
Jan 27 17:12:34 crc kubenswrapper[5049]: I0127 17:12:34.792730 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc"
Jan 27 17:12:34 crc kubenswrapper[5049]: I0127 17:12:34.845712 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc" podStartSLOduration=2.375199167 podStartE2EDuration="8.845659735s" podCreationTimestamp="2026-01-27 17:12:26 +0000 UTC" firstStartedPulling="2026-01-27 17:12:27.69816715 +0000 UTC m=+922.797140709" lastFinishedPulling="2026-01-27 17:12:34.168627718 +0000 UTC m=+929.267601277" observedRunningTime="2026-01-27 17:12:34.835049137 +0000 UTC m=+929.934022726" watchObservedRunningTime="2026-01-27 17:12:34.845659735 +0000 UTC m=+929.944633314"
Jan 27 17:12:35 crc kubenswrapper[5049]: I0127 17:12:35.802174 5049 generic.go:334] "Generic (PLEG): container finished" podID="d6eb1833-4714-4674-b8c8-c8d367b09d77" containerID="a45fdb4de2bf23a1667a7475e55d4a4d14fbb1707905ec27fbdfdf6cd22090d7" exitCode=0
Jan 27 17:12:35 crc kubenswrapper[5049]: I0127 17:12:35.802226 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cspr9" event={"ID":"d6eb1833-4714-4674-b8c8-c8d367b09d77","Type":"ContainerDied","Data":"a45fdb4de2bf23a1667a7475e55d4a4d14fbb1707905ec27fbdfdf6cd22090d7"}
Jan 27 17:12:36 crc kubenswrapper[5049]: I0127 17:12:36.811549 5049 generic.go:334] "Generic (PLEG): container finished" podID="d6eb1833-4714-4674-b8c8-c8d367b09d77" containerID="b2c618a0bc2227c14132dbefcc5cbc4f58409d39ed55436d5309b880717b62d1" exitCode=0
Jan 27 17:12:36 crc kubenswrapper[5049]: I0127 17:12:36.811643 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cspr9" event={"ID":"d6eb1833-4714-4674-b8c8-c8d367b09d77","Type":"ContainerDied","Data":"b2c618a0bc2227c14132dbefcc5cbc4f58409d39ed55436d5309b880717b62d1"}
Jan 27 17:12:37 crc kubenswrapper[5049]: I0127 17:12:37.820460 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cspr9" event={"ID":"d6eb1833-4714-4674-b8c8-c8d367b09d77","Type":"ContainerStarted","Data":"039e76db44b0c4c533ec95c09280be2efa11b62bc8eb1c391a902fea4a5d99db"}
Jan 27 17:12:37 crc kubenswrapper[5049]: I0127 17:12:37.820894 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cspr9" event={"ID":"d6eb1833-4714-4674-b8c8-c8d367b09d77","Type":"ContainerStarted","Data":"7f3de0004dbf870cee5eafdb50920f1fa7e162e9f31866c9fce571a3c98adaa4"}
Jan 27 17:12:37 crc kubenswrapper[5049]: I0127 17:12:37.820918 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-cspr9"
Jan 27 17:12:37 crc kubenswrapper[5049]: I0127 17:12:37.820931 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cspr9" event={"ID":"d6eb1833-4714-4674-b8c8-c8d367b09d77","Type":"ContainerStarted","Data":"32fbf0ff97ae12c465a8bc7b693fafeff0c8e55e7af037d159686cbfec2864ec"}
Jan 27 17:12:37 crc kubenswrapper[5049]: I0127 17:12:37.820943 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cspr9" event={"ID":"d6eb1833-4714-4674-b8c8-c8d367b09d77","Type":"ContainerStarted","Data":"4bf29c4615fd01da336a8c7fa5b0992f62478ef9959f65238ee2de579e2c566e"}
Jan 27 17:12:37 crc kubenswrapper[5049]: I0127 17:12:37.820957 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cspr9" event={"ID":"d6eb1833-4714-4674-b8c8-c8d367b09d77","Type":"ContainerStarted","Data":"ea1bdd879ce18e31e2498059a6a1aa6ba8ad6705438804f7790e148af28d73c2"}
Jan 27 17:12:37 crc kubenswrapper[5049]: I0127 17:12:37.820968 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-cspr9" event={"ID":"d6eb1833-4714-4674-b8c8-c8d367b09d77","Type":"ContainerStarted","Data":"ffc8b2740e709bb48dfe96a3d33f5be6180aa2d84d385222ca58072f7277da7e"}
Jan 27 17:12:37 crc kubenswrapper[5049]: I0127 17:12:37.845968 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-cspr9" podStartSLOduration=4.985066975 podStartE2EDuration="11.845944804s" podCreationTimestamp="2026-01-27 17:12:26 +0000 UTC" firstStartedPulling="2026-01-27 17:12:27.289992693 +0000 UTC m=+922.388966242" lastFinishedPulling="2026-01-27 17:12:34.150870522 +0000 UTC m=+929.249844071" observedRunningTime="2026-01-27 17:12:37.840086254 +0000 UTC m=+932.939059823" watchObservedRunningTime="2026-01-27 17:12:37.845944804 +0000 UTC m=+932.944918373"
Jan 27 17:12:42 crc kubenswrapper[5049]: I0127 17:12:42.133942 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-cspr9"
Jan 27 17:12:42 crc kubenswrapper[5049]: I0127 17:12:42.209615 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-cspr9"
Jan 27 17:12:47 crc kubenswrapper[5049]: I0127 17:12:47.137486 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-cspr9"
Jan 27 17:12:47 crc kubenswrapper[5049]: I0127 17:12:47.451616 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bptwc"
Jan 27 17:12:47 crc kubenswrapper[5049]: I0127 17:12:47.830413 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-8ngsz"
Jan 27 17:12:48 crc kubenswrapper[5049]: I0127 17:12:48.713185 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-plq87"
Jan 27 17:12:50 crc kubenswrapper[5049]: I0127 17:12:50.483125 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"]
Jan 27 17:12:50 crc kubenswrapper[5049]: I0127 17:12:50.484549 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"
Jan 27 17:12:50 crc kubenswrapper[5049]: I0127 17:12:50.491388 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"]
Jan 27 17:12:50 crc kubenswrapper[5049]: I0127 17:12:50.492241 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 27 17:12:50 crc kubenswrapper[5049]: I0127 17:12:50.685809 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88a53601-dcde-4640-9bc6-5fbb919a8efd-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm\" (UID: \"88a53601-dcde-4640-9bc6-5fbb919a8efd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"
Jan 27 17:12:50 crc kubenswrapper[5049]: I0127 17:12:50.685965 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88a53601-dcde-4640-9bc6-5fbb919a8efd-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm\" (UID: \"88a53601-dcde-4640-9bc6-5fbb919a8efd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"
Jan 27 17:12:50 crc kubenswrapper[5049]: I0127 17:12:50.686121 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2f4s\" (UniqueName: \"kubernetes.io/projected/88a53601-dcde-4640-9bc6-5fbb919a8efd-kube-api-access-j2f4s\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm\" (UID: \"88a53601-dcde-4640-9bc6-5fbb919a8efd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"
Jan 27 17:12:50 crc kubenswrapper[5049]: I0127 17:12:50.787147 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88a53601-dcde-4640-9bc6-5fbb919a8efd-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm\" (UID: \"88a53601-dcde-4640-9bc6-5fbb919a8efd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"
Jan 27 17:12:50 crc kubenswrapper[5049]: I0127 17:12:50.787270 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88a53601-dcde-4640-9bc6-5fbb919a8efd-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm\" (UID: \"88a53601-dcde-4640-9bc6-5fbb919a8efd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"
Jan 27 17:12:50 crc kubenswrapper[5049]: I0127 17:12:50.787353 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2f4s\" (UniqueName: \"kubernetes.io/projected/88a53601-dcde-4640-9bc6-5fbb919a8efd-kube-api-access-j2f4s\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm\" (UID: \"88a53601-dcde-4640-9bc6-5fbb919a8efd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"
Jan 27 17:12:50 crc kubenswrapper[5049]: I0127 17:12:50.787730 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88a53601-dcde-4640-9bc6-5fbb919a8efd-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm\" (UID: \"88a53601-dcde-4640-9bc6-5fbb919a8efd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"
Jan 27 17:12:50 crc kubenswrapper[5049]: I0127 17:12:50.788026 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88a53601-dcde-4640-9bc6-5fbb919a8efd-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm\" (UID: \"88a53601-dcde-4640-9bc6-5fbb919a8efd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"
Jan 27 17:12:50 crc kubenswrapper[5049]: I0127 17:12:50.831334 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2f4s\" (UniqueName: \"kubernetes.io/projected/88a53601-dcde-4640-9bc6-5fbb919a8efd-kube-api-access-j2f4s\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm\" (UID: \"88a53601-dcde-4640-9bc6-5fbb919a8efd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"
Jan 27 17:12:50 crc kubenswrapper[5049]: I0127 17:12:50.855200 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"
Jan 27 17:12:51 crc kubenswrapper[5049]: I0127 17:12:51.325188 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"]
Jan 27 17:12:51 crc kubenswrapper[5049]: W0127 17:12:51.335912 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88a53601_dcde_4640_9bc6_5fbb919a8efd.slice/crio-168efb7797e85a38d2d552a982c74cb6e45f45972a69f51336d64db197168408 WatchSource:0}: Error finding container 168efb7797e85a38d2d552a982c74cb6e45f45972a69f51336d64db197168408: Status 404 returned error can't find the container with id 168efb7797e85a38d2d552a982c74cb6e45f45972a69f51336d64db197168408
Jan 27 17:12:51 crc kubenswrapper[5049]: I0127 17:12:51.911923 5049 generic.go:334] "Generic (PLEG): container finished" podID="88a53601-dcde-4640-9bc6-5fbb919a8efd" containerID="d8b8f3bb39f88f69be7292cc2c9a36da1df8b31e6f44cf5bc62647549a433b4d" exitCode=0
Jan 27 17:12:51 crc kubenswrapper[5049]: I0127 17:12:51.912225 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm" event={"ID":"88a53601-dcde-4640-9bc6-5fbb919a8efd","Type":"ContainerDied","Data":"d8b8f3bb39f88f69be7292cc2c9a36da1df8b31e6f44cf5bc62647549a433b4d"}
Jan 27 17:12:51 crc kubenswrapper[5049]: I0127 17:12:51.912263 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm" event={"ID":"88a53601-dcde-4640-9bc6-5fbb919a8efd","Type":"ContainerStarted","Data":"168efb7797e85a38d2d552a982c74cb6e45f45972a69f51336d64db197168408"}
Jan 27 17:12:54 crc kubenswrapper[5049]: I0127 17:12:54.943484 5049 generic.go:334] "Generic (PLEG): container finished" podID="88a53601-dcde-4640-9bc6-5fbb919a8efd" containerID="56f5b193021fbd65d5051dd8473a1915dbb482abc092cb91760ef754475845c8" exitCode=0
Jan 27 17:12:54 crc kubenswrapper[5049]: I0127 17:12:54.943715 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm" event={"ID":"88a53601-dcde-4640-9bc6-5fbb919a8efd","Type":"ContainerDied","Data":"56f5b193021fbd65d5051dd8473a1915dbb482abc092cb91760ef754475845c8"}
Jan 27 17:12:55 crc kubenswrapper[5049]: I0127 17:12:55.954004 5049 generic.go:334] "Generic (PLEG): container finished" podID="88a53601-dcde-4640-9bc6-5fbb919a8efd" containerID="19eb888a2b5fff2556357a161e29811c4dc060f1f6b89e872f62fcfb071b5463" exitCode=0
Jan 27 17:12:55 crc kubenswrapper[5049]: I0127 17:12:55.954086 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm" event={"ID":"88a53601-dcde-4640-9bc6-5fbb919a8efd","Type":"ContainerDied","Data":"19eb888a2b5fff2556357a161e29811c4dc060f1f6b89e872f62fcfb071b5463"}
Jan 27 17:12:57 crc kubenswrapper[5049]: I0127 17:12:57.275827 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"
Jan 27 17:12:57 crc kubenswrapper[5049]: I0127 17:12:57.287195 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2f4s\" (UniqueName: \"kubernetes.io/projected/88a53601-dcde-4640-9bc6-5fbb919a8efd-kube-api-access-j2f4s\") pod \"88a53601-dcde-4640-9bc6-5fbb919a8efd\" (UID: \"88a53601-dcde-4640-9bc6-5fbb919a8efd\") "
Jan 27 17:12:57 crc kubenswrapper[5049]: I0127 17:12:57.287603 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88a53601-dcde-4640-9bc6-5fbb919a8efd-util\") pod \"88a53601-dcde-4640-9bc6-5fbb919a8efd\" (UID: \"88a53601-dcde-4640-9bc6-5fbb919a8efd\") "
Jan 27 17:12:57 crc kubenswrapper[5049]: I0127 17:12:57.287724 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88a53601-dcde-4640-9bc6-5fbb919a8efd-bundle\") pod \"88a53601-dcde-4640-9bc6-5fbb919a8efd\" (UID: \"88a53601-dcde-4640-9bc6-5fbb919a8efd\") "
Jan 27 17:12:57 crc kubenswrapper[5049]: I0127 17:12:57.288979 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88a53601-dcde-4640-9bc6-5fbb919a8efd-bundle" (OuterVolumeSpecName: "bundle") pod "88a53601-dcde-4640-9bc6-5fbb919a8efd" (UID: "88a53601-dcde-4640-9bc6-5fbb919a8efd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:12:57 crc kubenswrapper[5049]: I0127 17:12:57.295898 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88a53601-dcde-4640-9bc6-5fbb919a8efd-kube-api-access-j2f4s" (OuterVolumeSpecName: "kube-api-access-j2f4s") pod "88a53601-dcde-4640-9bc6-5fbb919a8efd" (UID: "88a53601-dcde-4640-9bc6-5fbb919a8efd"). InnerVolumeSpecName "kube-api-access-j2f4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:12:57 crc kubenswrapper[5049]: I0127 17:12:57.298435 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88a53601-dcde-4640-9bc6-5fbb919a8efd-util" (OuterVolumeSpecName: "util") pod "88a53601-dcde-4640-9bc6-5fbb919a8efd" (UID: "88a53601-dcde-4640-9bc6-5fbb919a8efd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:12:57 crc kubenswrapper[5049]: I0127 17:12:57.389638 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2f4s\" (UniqueName: \"kubernetes.io/projected/88a53601-dcde-4640-9bc6-5fbb919a8efd-kube-api-access-j2f4s\") on node \"crc\" DevicePath \"\""
Jan 27 17:12:57 crc kubenswrapper[5049]: I0127 17:12:57.389747 5049 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/88a53601-dcde-4640-9bc6-5fbb919a8efd-util\") on node \"crc\" DevicePath \"\""
Jan 27 17:12:57 crc kubenswrapper[5049]: I0127 17:12:57.389763 5049 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/88a53601-dcde-4640-9bc6-5fbb919a8efd-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 17:12:57 crc kubenswrapper[5049]: I0127 17:12:57.971262 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm" event={"ID":"88a53601-dcde-4640-9bc6-5fbb919a8efd","Type":"ContainerDied","Data":"168efb7797e85a38d2d552a982c74cb6e45f45972a69f51336d64db197168408"}
Jan 27 17:12:57 crc kubenswrapper[5049]: I0127 17:12:57.971748 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="168efb7797e85a38d2d552a982c74cb6e45f45972a69f51336d64db197168408"
Jan 27 17:12:57 crc kubenswrapper[5049]: I0127 17:12:57.971890 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.427901 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-klhhr"]
Jan 27 17:12:59 crc kubenswrapper[5049]: E0127 17:12:59.428468 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a53601-dcde-4640-9bc6-5fbb919a8efd" containerName="pull"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.428490 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a53601-dcde-4640-9bc6-5fbb919a8efd" containerName="pull"
Jan 27 17:12:59 crc kubenswrapper[5049]: E0127 17:12:59.428550 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a53601-dcde-4640-9bc6-5fbb919a8efd" containerName="util"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.428564 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a53601-dcde-4640-9bc6-5fbb919a8efd" containerName="util"
Jan 27 17:12:59 crc kubenswrapper[5049]: E0127 17:12:59.428628 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a53601-dcde-4640-9bc6-5fbb919a8efd" containerName="extract"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.428643 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a53601-dcde-4640-9bc6-5fbb919a8efd" containerName="extract"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.428984 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a53601-dcde-4640-9bc6-5fbb919a8efd" containerName="extract"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.430433 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.452295 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-klhhr"]
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.514026 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01703fa-3762-4dd4-99d8-5f911bdc85b5-catalog-content\") pod \"community-operators-klhhr\" (UID: \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\") " pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.514113 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc8bg\" (UniqueName: \"kubernetes.io/projected/a01703fa-3762-4dd4-99d8-5f911bdc85b5-kube-api-access-nc8bg\") pod \"community-operators-klhhr\" (UID: \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\") " pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.514174 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01703fa-3762-4dd4-99d8-5f911bdc85b5-utilities\") pod \"community-operators-klhhr\" (UID: \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\") " pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.614783 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc8bg\" (UniqueName: \"kubernetes.io/projected/a01703fa-3762-4dd4-99d8-5f911bdc85b5-kube-api-access-nc8bg\") pod \"community-operators-klhhr\" (UID: \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\") " pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.615064 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01703fa-3762-4dd4-99d8-5f911bdc85b5-utilities\") pod \"community-operators-klhhr\" (UID: \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\") " pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.615213 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01703fa-3762-4dd4-99d8-5f911bdc85b5-catalog-content\") pod \"community-operators-klhhr\" (UID: \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\") " pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.615549 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01703fa-3762-4dd4-99d8-5f911bdc85b5-utilities\") pod \"community-operators-klhhr\" (UID: \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\") " pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.615736 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01703fa-3762-4dd4-99d8-5f911bdc85b5-catalog-content\") pod \"community-operators-klhhr\" (UID: \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\") " pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.640014 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc8bg\" (UniqueName: \"kubernetes.io/projected/a01703fa-3762-4dd4-99d8-5f911bdc85b5-kube-api-access-nc8bg\") pod \"community-operators-klhhr\" (UID: \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\") " pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:12:59 crc kubenswrapper[5049]: I0127 17:12:59.763654 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:13:00 crc kubenswrapper[5049]: I0127 17:13:00.269986 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-klhhr"]
Jan 27 17:13:00 crc kubenswrapper[5049]: W0127 17:13:00.300189 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda01703fa_3762_4dd4_99d8_5f911bdc85b5.slice/crio-4fd608a003fb060ed91a14ab4bb614de937f15d8728cfcb96398ff320e3f9d66 WatchSource:0}: Error finding container 4fd608a003fb060ed91a14ab4bb614de937f15d8728cfcb96398ff320e3f9d66: Status 404 returned error can't find the container with id 4fd608a003fb060ed91a14ab4bb614de937f15d8728cfcb96398ff320e3f9d66
Jan 27 17:13:00 crc kubenswrapper[5049]: I0127 17:13:00.991897 5049 generic.go:334] "Generic (PLEG): container finished" podID="a01703fa-3762-4dd4-99d8-5f911bdc85b5" containerID="99959444824e8c2b5d12b600e9c355d1cb0314f0ad62ca7e4e1e0aceb7a4a852" exitCode=0
Jan 27 17:13:00 crc kubenswrapper[5049]: I0127 17:13:00.991959 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klhhr" event={"ID":"a01703fa-3762-4dd4-99d8-5f911bdc85b5","Type":"ContainerDied","Data":"99959444824e8c2b5d12b600e9c355d1cb0314f0ad62ca7e4e1e0aceb7a4a852"}
Jan 27 17:13:00 crc kubenswrapper[5049]: I0127 17:13:00.992154 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klhhr" event={"ID":"a01703fa-3762-4dd4-99d8-5f911bdc85b5","Type":"ContainerStarted","Data":"4fd608a003fb060ed91a14ab4bb614de937f15d8728cfcb96398ff320e3f9d66"}
Jan 27 17:13:02 crc kubenswrapper[5049]: I0127 17:13:02.343877 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7wtt5"]
Jan 27 17:13:02 crc kubenswrapper[5049]: I0127 17:13:02.344697 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7wtt5"
Jan 27 17:13:02 crc kubenswrapper[5049]: I0127 17:13:02.346919 5049 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-9lnl2"
Jan 27 17:13:02 crc kubenswrapper[5049]: I0127 17:13:02.348353 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Jan 27 17:13:02 crc kubenswrapper[5049]: I0127 17:13:02.349550 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Jan 27 17:13:02 crc kubenswrapper[5049]: I0127 17:13:02.363233 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7wtt5"]
Jan 27 17:13:02 crc kubenswrapper[5049]: I0127 17:13:02.456949 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m9qs\" (UniqueName: \"kubernetes.io/projected/2bae6706-904b-4f39-83fc-05d1cc17c78a-kube-api-access-8m9qs\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7wtt5\" (UID: \"2bae6706-904b-4f39-83fc-05d1cc17c78a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7wtt5"
Jan 27 17:13:02 crc kubenswrapper[5049]: I0127 17:13:02.457022 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2bae6706-904b-4f39-83fc-05d1cc17c78a-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7wtt5\" (UID: \"2bae6706-904b-4f39-83fc-05d1cc17c78a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7wtt5"
Jan 27 17:13:02 crc kubenswrapper[5049]: I0127 17:13:02.558783 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m9qs\" (UniqueName: \"kubernetes.io/projected/2bae6706-904b-4f39-83fc-05d1cc17c78a-kube-api-access-8m9qs\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7wtt5\" (UID: \"2bae6706-904b-4f39-83fc-05d1cc17c78a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7wtt5"
Jan 27 17:13:02 crc kubenswrapper[5049]: I0127 17:13:02.558858 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2bae6706-904b-4f39-83fc-05d1cc17c78a-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7wtt5\" (UID: \"2bae6706-904b-4f39-83fc-05d1cc17c78a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7wtt5"
Jan 27 17:13:02 crc kubenswrapper[5049]: I0127 17:13:02.559559 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2bae6706-904b-4f39-83fc-05d1cc17c78a-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7wtt5\" (UID: \"2bae6706-904b-4f39-83fc-05d1cc17c78a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7wtt5"
Jan 27 17:13:02 crc kubenswrapper[5049]: I0127 17:13:02.577801 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m9qs\" (UniqueName: \"kubernetes.io/projected/2bae6706-904b-4f39-83fc-05d1cc17c78a-kube-api-access-8m9qs\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7wtt5\" (UID: \"2bae6706-904b-4f39-83fc-05d1cc17c78a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7wtt5"
Jan 27 17:13:02 crc kubenswrapper[5049]: I0127 17:13:02.659783 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7wtt5"
Jan 27 17:13:03 crc kubenswrapper[5049]: I0127 17:13:03.150394 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7wtt5"]
Jan 27 17:13:03 crc kubenswrapper[5049]: W0127 17:13:03.155762 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bae6706_904b_4f39_83fc_05d1cc17c78a.slice/crio-f1f906fc4fbb647cd432d97383afce79f6d840bcc58d824db6280f451d266c2c WatchSource:0}: Error finding container f1f906fc4fbb647cd432d97383afce79f6d840bcc58d824db6280f451d266c2c: Status 404 returned error can't find the container with id f1f906fc4fbb647cd432d97383afce79f6d840bcc58d824db6280f451d266c2c
Jan 27 17:13:04 crc kubenswrapper[5049]: I0127 17:13:04.014202 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7wtt5" event={"ID":"2bae6706-904b-4f39-83fc-05d1cc17c78a","Type":"ContainerStarted","Data":"f1f906fc4fbb647cd432d97383afce79f6d840bcc58d824db6280f451d266c2c"}
Jan 27 17:13:05 crc kubenswrapper[5049]: I0127 17:13:05.820859 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xvfdb"]
Jan 27 17:13:05 crc kubenswrapper[5049]: I0127 17:13:05.822713 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:05 crc kubenswrapper[5049]: I0127 17:13:05.831222 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvfdb"]
Jan 27 17:13:05 crc kubenswrapper[5049]: I0127 17:13:05.948694 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qscf\" (UniqueName: \"kubernetes.io/projected/4af949e8-9546-43ca-ae17-82238e2169f2-kube-api-access-5qscf\") pod \"certified-operators-xvfdb\" (UID: \"4af949e8-9546-43ca-ae17-82238e2169f2\") " pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:05 crc kubenswrapper[5049]: I0127 17:13:05.948821 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af949e8-9546-43ca-ae17-82238e2169f2-utilities\") pod \"certified-operators-xvfdb\" (UID: \"4af949e8-9546-43ca-ae17-82238e2169f2\") " pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:05 crc kubenswrapper[5049]: I0127 17:13:05.948846 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af949e8-9546-43ca-ae17-82238e2169f2-catalog-content\") pod \"certified-operators-xvfdb\" (UID: \"4af949e8-9546-43ca-ae17-82238e2169f2\") " pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:06 crc kubenswrapper[5049]: I0127 17:13:06.028226 5049 generic.go:334] "Generic (PLEG): container finished" podID="a01703fa-3762-4dd4-99d8-5f911bdc85b5" containerID="04619863374d76c28ae28f7af9c58cb25c90e741b89e96c9b656199b34349300" exitCode=0
Jan 27 17:13:06 crc kubenswrapper[5049]: I0127 17:13:06.028309 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klhhr" event={"ID":"a01703fa-3762-4dd4-99d8-5f911bdc85b5","Type":"ContainerDied","Data":"04619863374d76c28ae28f7af9c58cb25c90e741b89e96c9b656199b34349300"}
Jan 27 17:13:06 crc kubenswrapper[5049]: I0127 17:13:06.054108 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af949e8-9546-43ca-ae17-82238e2169f2-utilities\") pod \"certified-operators-xvfdb\" (UID: \"4af949e8-9546-43ca-ae17-82238e2169f2\") " pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:06 crc kubenswrapper[5049]: I0127 17:13:06.054160 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af949e8-9546-43ca-ae17-82238e2169f2-catalog-content\") pod \"certified-operators-xvfdb\" (UID: \"4af949e8-9546-43ca-ae17-82238e2169f2\") " pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:06 crc kubenswrapper[5049]: I0127 17:13:06.054215 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qscf\" (UniqueName: \"kubernetes.io/projected/4af949e8-9546-43ca-ae17-82238e2169f2-kube-api-access-5qscf\") pod \"certified-operators-xvfdb\" (UID: \"4af949e8-9546-43ca-ae17-82238e2169f2\") " pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:06 crc kubenswrapper[5049]: I0127 17:13:06.055095 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af949e8-9546-43ca-ae17-82238e2169f2-catalog-content\") pod \"certified-operators-xvfdb\" (UID: \"4af949e8-9546-43ca-ae17-82238e2169f2\") " pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:06 crc kubenswrapper[5049]: I0127 17:13:06.055241 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af949e8-9546-43ca-ae17-82238e2169f2-utilities\") pod \"certified-operators-xvfdb\" (UID: \"4af949e8-9546-43ca-ae17-82238e2169f2\") " pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:06 crc kubenswrapper[5049]: I0127 17:13:06.080422 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qscf\" (UniqueName: \"kubernetes.io/projected/4af949e8-9546-43ca-ae17-82238e2169f2-kube-api-access-5qscf\") pod \"certified-operators-xvfdb\" (UID: \"4af949e8-9546-43ca-ae17-82238e2169f2\") " pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:06 crc kubenswrapper[5049]: I0127 17:13:06.167709 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:06 crc kubenswrapper[5049]: I0127 17:13:06.451380 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xvfdb"]
Jan 27 17:13:07 crc kubenswrapper[5049]: I0127 17:13:07.052729 5049 generic.go:334] "Generic (PLEG): container finished" podID="4af949e8-9546-43ca-ae17-82238e2169f2" containerID="cbf433378b7aeb35d449fc7c180eb0aa827c1ec85b7437d5c2eb5e51d0311ae5" exitCode=0
Jan 27 17:13:07 crc kubenswrapper[5049]: I0127 17:13:07.052928 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvfdb" event={"ID":"4af949e8-9546-43ca-ae17-82238e2169f2","Type":"ContainerDied","Data":"cbf433378b7aeb35d449fc7c180eb0aa827c1ec85b7437d5c2eb5e51d0311ae5"}
Jan 27 17:13:07 crc kubenswrapper[5049]: I0127 17:13:07.052977 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvfdb" event={"ID":"4af949e8-9546-43ca-ae17-82238e2169f2","Type":"ContainerStarted","Data":"da1826f656fbfd26bdec5e1640b1881cfb05a818002e1f5941c370c9e878a4bb"}
Jan 27 17:13:07 crc kubenswrapper[5049]: I0127 17:13:07.073336 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klhhr" event={"ID":"a01703fa-3762-4dd4-99d8-5f911bdc85b5","Type":"ContainerStarted","Data":"fb79725d74f771b76164fb03d1e54849261edd176cffccfcbef2d1d11255cb11"}
Jan 27 17:13:07 crc kubenswrapper[5049]: I0127 17:13:07.095229 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-klhhr" podStartSLOduration=2.569316608 podStartE2EDuration="8.095193885s" podCreationTimestamp="2026-01-27 17:12:59 +0000 UTC" firstStartedPulling="2026-01-27 17:13:00.993871282 +0000 UTC m=+956.092844841" lastFinishedPulling="2026-01-27 17:13:06.519748569 +0000 UTC m=+961.618722118" observedRunningTime="2026-01-27 17:13:07.091999393 +0000 UTC m=+962.190972952" watchObservedRunningTime="2026-01-27 17:13:07.095193885 +0000 UTC m=+962.194167424"
Jan 27 17:13:09 crc kubenswrapper[5049]: I0127 17:13:09.092321 5049 generic.go:334] "Generic (PLEG): container finished" podID="4af949e8-9546-43ca-ae17-82238e2169f2" containerID="5031dd51cef46e4c8a877cb4d4419efa59281810797fc1e3ab6d30b2908770d4" exitCode=0
Jan 27 17:13:09 crc kubenswrapper[5049]: I0127 17:13:09.092377 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvfdb" event={"ID":"4af949e8-9546-43ca-ae17-82238e2169f2","Type":"ContainerDied","Data":"5031dd51cef46e4c8a877cb4d4419efa59281810797fc1e3ab6d30b2908770d4"}
Jan 27 17:13:09 crc kubenswrapper[5049]: I0127 17:13:09.764306 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:13:09 crc kubenswrapper[5049]: I0127 17:13:09.764354 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:13:09 crc kubenswrapper[5049]: I0127 17:13:09.814121 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:13:12 crc kubenswrapper[5049]: I0127 17:13:12.227868 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k2wqw"]
Jan 27 17:13:12 crc kubenswrapper[5049]: I0127 17:13:12.232778 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:12 crc kubenswrapper[5049]: I0127 17:13:12.245218 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2wqw"]
Jan 27 17:13:12 crc kubenswrapper[5049]: I0127 17:13:12.264796 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81f4771-673d-4440-8fd6-c964024aa07b-catalog-content\") pod \"redhat-marketplace-k2wqw\" (UID: \"d81f4771-673d-4440-8fd6-c964024aa07b\") " pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:12 crc kubenswrapper[5049]: I0127 17:13:12.264868 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfscq\" (UniqueName: \"kubernetes.io/projected/d81f4771-673d-4440-8fd6-c964024aa07b-kube-api-access-pfscq\") pod \"redhat-marketplace-k2wqw\" (UID: \"d81f4771-673d-4440-8fd6-c964024aa07b\") " pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:12 crc kubenswrapper[5049]: I0127 17:13:12.264926 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81f4771-673d-4440-8fd6-c964024aa07b-utilities\") pod \"redhat-marketplace-k2wqw\" (UID: \"d81f4771-673d-4440-8fd6-c964024aa07b\") " pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:12 crc kubenswrapper[5049]: I0127 17:13:12.366504 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81f4771-673d-4440-8fd6-c964024aa07b-catalog-content\") pod \"redhat-marketplace-k2wqw\" (UID: \"d81f4771-673d-4440-8fd6-c964024aa07b\") " pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:12 crc kubenswrapper[5049]: I0127 17:13:12.366562 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfscq\" (UniqueName: \"kubernetes.io/projected/d81f4771-673d-4440-8fd6-c964024aa07b-kube-api-access-pfscq\") pod \"redhat-marketplace-k2wqw\" (UID: \"d81f4771-673d-4440-8fd6-c964024aa07b\") " pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:12 crc kubenswrapper[5049]: I0127 17:13:12.366585 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81f4771-673d-4440-8fd6-c964024aa07b-utilities\") pod \"redhat-marketplace-k2wqw\" (UID: \"d81f4771-673d-4440-8fd6-c964024aa07b\") " pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:12 crc kubenswrapper[5049]: I0127 17:13:12.367109 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81f4771-673d-4440-8fd6-c964024aa07b-catalog-content\") pod \"redhat-marketplace-k2wqw\" (UID: \"d81f4771-673d-4440-8fd6-c964024aa07b\") " pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:12 crc kubenswrapper[5049]: I0127 17:13:12.367122 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81f4771-673d-4440-8fd6-c964024aa07b-utilities\") pod \"redhat-marketplace-k2wqw\" (UID: \"d81f4771-673d-4440-8fd6-c964024aa07b\") " pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:12 crc kubenswrapper[5049]: I0127 17:13:12.389581 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfscq\" (UniqueName: \"kubernetes.io/projected/d81f4771-673d-4440-8fd6-c964024aa07b-kube-api-access-pfscq\") pod \"redhat-marketplace-k2wqw\" (UID: \"d81f4771-673d-4440-8fd6-c964024aa07b\") " pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:12 crc kubenswrapper[5049]: I0127 17:13:12.568133 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:13 crc kubenswrapper[5049]: I0127 17:13:13.469123 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2wqw"]
Jan 27 17:13:13 crc kubenswrapper[5049]: W0127 17:13:13.485114 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd81f4771_673d_4440_8fd6_c964024aa07b.slice/crio-dc8e4e5e7d2bfbf354ec31d05955cf76799490b811a1fdbc51a5d4627ccf5f5e WatchSource:0}: Error finding container dc8e4e5e7d2bfbf354ec31d05955cf76799490b811a1fdbc51a5d4627ccf5f5e: Status 404 returned error can't find the container with id dc8e4e5e7d2bfbf354ec31d05955cf76799490b811a1fdbc51a5d4627ccf5f5e
Jan 27 17:13:14 crc kubenswrapper[5049]: I0127 17:13:14.129978 5049 generic.go:334] "Generic (PLEG): container finished" podID="d81f4771-673d-4440-8fd6-c964024aa07b" containerID="94696b324c8e06b52a0fe3b7af3963dd6bfa8d6165fe13b189b5c125bca64afd" exitCode=0
Jan 27 17:13:14 crc kubenswrapper[5049]: I0127 17:13:14.130042 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2wqw" event={"ID":"d81f4771-673d-4440-8fd6-c964024aa07b","Type":"ContainerDied","Data":"94696b324c8e06b52a0fe3b7af3963dd6bfa8d6165fe13b189b5c125bca64afd"}
Jan 27 17:13:14 crc kubenswrapper[5049]: I0127 17:13:14.130320 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2wqw" event={"ID":"d81f4771-673d-4440-8fd6-c964024aa07b","Type":"ContainerStarted","Data":"dc8e4e5e7d2bfbf354ec31d05955cf76799490b811a1fdbc51a5d4627ccf5f5e"}
Jan 27 17:13:14 crc kubenswrapper[5049]: I0127 17:13:14.132512 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvfdb" event={"ID":"4af949e8-9546-43ca-ae17-82238e2169f2","Type":"ContainerStarted","Data":"8fd1a4e5dac957d44e2ab3b0db8c4b85fa429e013b36dda52923b85bd4462dee"}
Jan 27 17:13:14 crc kubenswrapper[5049]: I0127 17:13:14.136248 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7wtt5" event={"ID":"2bae6706-904b-4f39-83fc-05d1cc17c78a","Type":"ContainerStarted","Data":"6467b3b6315a1b88310d77cfa090a631b8b542ce547d3947ddda0e8caea57e60"}
Jan 27 17:13:14 crc kubenswrapper[5049]: I0127 17:13:14.190267 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7wtt5" podStartSLOduration=2.256361197 podStartE2EDuration="12.190243667s" podCreationTimestamp="2026-01-27 17:13:02 +0000 UTC" firstStartedPulling="2026-01-27 17:13:03.158939588 +0000 UTC m=+958.257913137" lastFinishedPulling="2026-01-27 17:13:13.092822058 +0000 UTC m=+968.191795607" observedRunningTime="2026-01-27 17:13:14.185077637 +0000 UTC m=+969.284051226" watchObservedRunningTime="2026-01-27 17:13:14.190243667 +0000 UTC m=+969.289217216"
Jan 27 17:13:14 crc kubenswrapper[5049]: I0127 17:13:14.191598 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xvfdb" podStartSLOduration=3.178375 podStartE2EDuration="9.191589466s" podCreationTimestamp="2026-01-27 17:13:05 +0000 UTC" firstStartedPulling="2026-01-27 17:13:07.054571495 +0000 UTC m=+962.153545044" lastFinishedPulling="2026-01-27 17:13:13.067785971 +0000 UTC m=+968.166759510" observedRunningTime="2026-01-27 17:13:14.168027322 +0000 UTC m=+969.267000871" watchObservedRunningTime="2026-01-27 17:13:14.191589466 +0000 UTC m=+969.290563015"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.079154 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-w4pq6"]
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.080086 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-w4pq6"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.087034 5049 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-d52dx"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.088488 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.088638 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.096779 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-w4pq6"]
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.148572 5049 generic.go:334] "Generic (PLEG): container finished" podID="d81f4771-673d-4440-8fd6-c964024aa07b" containerID="5eead88f8010bc20c5b93aff572df0792fbc628da6615966bd2f6e57e048117b" exitCode=0
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.148603 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2wqw" event={"ID":"d81f4771-673d-4440-8fd6-c964024aa07b","Type":"ContainerDied","Data":"5eead88f8010bc20c5b93aff572df0792fbc628da6615966bd2f6e57e048117b"}
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.169717 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.169750 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.217875 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.246787 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mvn7\" (UniqueName: \"kubernetes.io/projected/d57413ac-f34c-430e-9c96-18f0da415614-kube-api-access-9mvn7\") pod \"cert-manager-webhook-f4fb5df64-w4pq6\" (UID: \"d57413ac-f34c-430e-9c96-18f0da415614\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w4pq6"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.246852 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d57413ac-f34c-430e-9c96-18f0da415614-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-w4pq6\" (UID: \"d57413ac-f34c-430e-9c96-18f0da415614\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w4pq6"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.348456 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d57413ac-f34c-430e-9c96-18f0da415614-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-w4pq6\" (UID: \"d57413ac-f34c-430e-9c96-18f0da415614\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w4pq6"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.348590 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mvn7\" (UniqueName: \"kubernetes.io/projected/d57413ac-f34c-430e-9c96-18f0da415614-kube-api-access-9mvn7\") pod \"cert-manager-webhook-f4fb5df64-w4pq6\" (UID: \"d57413ac-f34c-430e-9c96-18f0da415614\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w4pq6"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.374727 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d57413ac-f34c-430e-9c96-18f0da415614-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-w4pq6\" (UID: \"d57413ac-f34c-430e-9c96-18f0da415614\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w4pq6"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.377584 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mvn7\" (UniqueName: \"kubernetes.io/projected/d57413ac-f34c-430e-9c96-18f0da415614-kube-api-access-9mvn7\") pod \"cert-manager-webhook-f4fb5df64-w4pq6\" (UID: \"d57413ac-f34c-430e-9c96-18f0da415614\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w4pq6"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.399460 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-w4pq6"
Jan 27 17:13:16 crc kubenswrapper[5049]: I0127 17:13:16.930182 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-w4pq6"]
Jan 27 17:13:16 crc kubenswrapper[5049]: W0127 17:13:16.936533 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd57413ac_f34c_430e_9c96_18f0da415614.slice/crio-5ff09540d7ecd0d6943a974578f083f4e8e783530a67179cb19be28ab484cbe4 WatchSource:0}: Error finding container 5ff09540d7ecd0d6943a974578f083f4e8e783530a67179cb19be28ab484cbe4: Status 404 returned error can't find the container with id 5ff09540d7ecd0d6943a974578f083f4e8e783530a67179cb19be28ab484cbe4
Jan 27 17:13:17 crc kubenswrapper[5049]: I0127 17:13:17.166605 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2wqw" event={"ID":"d81f4771-673d-4440-8fd6-c964024aa07b","Type":"ContainerStarted","Data":"7f21eac1fd59385d50a3b5120d428fc1491c119aa743a460cd27981ed6f97599"}
Jan 27 17:13:17 crc kubenswrapper[5049]: I0127 17:13:17.167663 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-w4pq6" event={"ID":"d57413ac-f34c-430e-9c96-18f0da415614","Type":"ContainerStarted","Data":"5ff09540d7ecd0d6943a974578f083f4e8e783530a67179cb19be28ab484cbe4"}
Jan 27 17:13:17 crc kubenswrapper[5049]: I0127 17:13:17.182784 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k2wqw" podStartSLOduration=2.724307869 podStartE2EDuration="5.182759668s" podCreationTimestamp="2026-01-27 17:13:12 +0000 UTC" firstStartedPulling="2026-01-27 17:13:14.132281524 +0000 UTC m=+969.231255073" lastFinishedPulling="2026-01-27 17:13:16.590733323 +0000 UTC m=+971.689706872" observedRunningTime="2026-01-27 17:13:17.181915754 +0000 UTC m=+972.280889303" watchObservedRunningTime="2026-01-27 17:13:17.182759668 +0000 UTC m=+972.281733237"
Jan 27 17:13:19 crc kubenswrapper[5049]: I0127 17:13:19.823414 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-klhhr"
Jan 27 17:13:20 crc kubenswrapper[5049]: I0127 17:13:20.247361 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-wcq9n"]
Jan 27 17:13:20 crc kubenswrapper[5049]: I0127 17:13:20.248279 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wcq9n"
Jan 27 17:13:20 crc kubenswrapper[5049]: I0127 17:13:20.251950 5049 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-t2qzj"
Jan 27 17:13:20 crc kubenswrapper[5049]: I0127 17:13:20.260776 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-wcq9n"]
Jan 27 17:13:20 crc kubenswrapper[5049]: I0127 17:13:20.400604 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2hjc\" (UniqueName: \"kubernetes.io/projected/4091c9ea-7f48-4824-867f-378f7a2a8c04-kube-api-access-w2hjc\") pod \"cert-manager-cainjector-855d9ccff4-wcq9n\" (UID: \"4091c9ea-7f48-4824-867f-378f7a2a8c04\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wcq9n"
Jan 27 17:13:20 crc kubenswrapper[5049]: I0127 17:13:20.400736 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4091c9ea-7f48-4824-867f-378f7a2a8c04-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-wcq9n\" (UID: \"4091c9ea-7f48-4824-867f-378f7a2a8c04\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wcq9n"
Jan 27 17:13:20 crc kubenswrapper[5049]: I0127 17:13:20.502453 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4091c9ea-7f48-4824-867f-378f7a2a8c04-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-wcq9n\" (UID: \"4091c9ea-7f48-4824-867f-378f7a2a8c04\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wcq9n"
Jan 27 17:13:20 crc kubenswrapper[5049]: I0127 17:13:20.502567 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2hjc\" (UniqueName: \"kubernetes.io/projected/4091c9ea-7f48-4824-867f-378f7a2a8c04-kube-api-access-w2hjc\") pod \"cert-manager-cainjector-855d9ccff4-wcq9n\" (UID: \"4091c9ea-7f48-4824-867f-378f7a2a8c04\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wcq9n"
Jan 27 17:13:20 crc kubenswrapper[5049]: I0127 17:13:20.521597 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4091c9ea-7f48-4824-867f-378f7a2a8c04-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-wcq9n\" (UID: \"4091c9ea-7f48-4824-867f-378f7a2a8c04\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wcq9n"
Jan 27 17:13:20 crc kubenswrapper[5049]: I0127 17:13:20.534040 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2hjc\" (UniqueName: \"kubernetes.io/projected/4091c9ea-7f48-4824-867f-378f7a2a8c04-kube-api-access-w2hjc\") pod \"cert-manager-cainjector-855d9ccff4-wcq9n\" (UID: \"4091c9ea-7f48-4824-867f-378f7a2a8c04\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wcq9n"
Jan 27 17:13:20 crc kubenswrapper[5049]: I0127 17:13:20.611902 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wcq9n"
Jan 27 17:13:21 crc kubenswrapper[5049]: I0127 17:13:21.098857 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-wcq9n"]
Jan 27 17:13:21 crc kubenswrapper[5049]: W0127 17:13:21.108969 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4091c9ea_7f48_4824_867f_378f7a2a8c04.slice/crio-e0371595a20f3f526571d09c2fcc5bbb85ef04ea89b0c9df9bf6e90433c6bea5 WatchSource:0}: Error finding container e0371595a20f3f526571d09c2fcc5bbb85ef04ea89b0c9df9bf6e90433c6bea5: Status 404 returned error can't find the container with id e0371595a20f3f526571d09c2fcc5bbb85ef04ea89b0c9df9bf6e90433c6bea5
Jan 27 17:13:21 crc kubenswrapper[5049]: I0127 17:13:21.193072 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wcq9n" event={"ID":"4091c9ea-7f48-4824-867f-378f7a2a8c04","Type":"ContainerStarted","Data":"e0371595a20f3f526571d09c2fcc5bbb85ef04ea89b0c9df9bf6e90433c6bea5"}
Jan 27 17:13:21 crc kubenswrapper[5049]: I0127 17:13:21.250785 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-klhhr"]
Jan 27 17:13:21 crc kubenswrapper[5049]: I0127 17:13:21.415525 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pdwcs"]
Jan 27 17:13:21 crc kubenswrapper[5049]: I0127 17:13:21.415818 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pdwcs" podUID="cea0ebf5-d5b2-4215-923e-df9a49b83828" containerName="registry-server" containerID="cri-o://40ef394ecbc8b00620addf9c13181b4114a1b14ddc2ae742502c5fdcc20ac986" gracePeriod=2
Jan 27 17:13:21 crc kubenswrapper[5049]: I0127 17:13:21.778166 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pdwcs"
Jan 27 17:13:21 crc kubenswrapper[5049]: I0127 17:13:21.936061 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmhlm\" (UniqueName: \"kubernetes.io/projected/cea0ebf5-d5b2-4215-923e-df9a49b83828-kube-api-access-qmhlm\") pod \"cea0ebf5-d5b2-4215-923e-df9a49b83828\" (UID: \"cea0ebf5-d5b2-4215-923e-df9a49b83828\") "
Jan 27 17:13:21 crc kubenswrapper[5049]: I0127 17:13:21.936523 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cea0ebf5-d5b2-4215-923e-df9a49b83828-catalog-content\") pod \"cea0ebf5-d5b2-4215-923e-df9a49b83828\" (UID: \"cea0ebf5-d5b2-4215-923e-df9a49b83828\") "
Jan 27 17:13:21 crc kubenswrapper[5049]: I0127 17:13:21.936569 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cea0ebf5-d5b2-4215-923e-df9a49b83828-utilities\") pod \"cea0ebf5-d5b2-4215-923e-df9a49b83828\" (UID: \"cea0ebf5-d5b2-4215-923e-df9a49b83828\") "
Jan 27 17:13:21 crc kubenswrapper[5049]: I0127 17:13:21.937803 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cea0ebf5-d5b2-4215-923e-df9a49b83828-utilities" (OuterVolumeSpecName: "utilities") pod "cea0ebf5-d5b2-4215-923e-df9a49b83828" (UID: "cea0ebf5-d5b2-4215-923e-df9a49b83828"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:13:21 crc kubenswrapper[5049]: I0127 17:13:21.940948 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea0ebf5-d5b2-4215-923e-df9a49b83828-kube-api-access-qmhlm" (OuterVolumeSpecName: "kube-api-access-qmhlm") pod "cea0ebf5-d5b2-4215-923e-df9a49b83828" (UID: "cea0ebf5-d5b2-4215-923e-df9a49b83828"). InnerVolumeSpecName "kube-api-access-qmhlm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:13:21 crc kubenswrapper[5049]: I0127 17:13:21.994264 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cea0ebf5-d5b2-4215-923e-df9a49b83828-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cea0ebf5-d5b2-4215-923e-df9a49b83828" (UID: "cea0ebf5-d5b2-4215-923e-df9a49b83828"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.063109 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cea0ebf5-d5b2-4215-923e-df9a49b83828-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.063171 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cea0ebf5-d5b2-4215-923e-df9a49b83828-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.063186 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmhlm\" (UniqueName: \"kubernetes.io/projected/cea0ebf5-d5b2-4215-923e-df9a49b83828-kube-api-access-qmhlm\") on node \"crc\" DevicePath \"\""
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.204367 5049 generic.go:334] "Generic (PLEG): container finished" podID="cea0ebf5-d5b2-4215-923e-df9a49b83828" containerID="40ef394ecbc8b00620addf9c13181b4114a1b14ddc2ae742502c5fdcc20ac986" exitCode=0
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.204426 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdwcs" event={"ID":"cea0ebf5-d5b2-4215-923e-df9a49b83828","Type":"ContainerDied","Data":"40ef394ecbc8b00620addf9c13181b4114a1b14ddc2ae742502c5fdcc20ac986"}
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.204485 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pdwcs" event={"ID":"cea0ebf5-d5b2-4215-923e-df9a49b83828","Type":"ContainerDied","Data":"88b54223fe0346a8d5564888a1c35ebf9f68affcc729803567e8eefdbca56efc"}
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.204504 5049 scope.go:117] "RemoveContainer" containerID="40ef394ecbc8b00620addf9c13181b4114a1b14ddc2ae742502c5fdcc20ac986"
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.204658 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pdwcs"
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.227732 5049 scope.go:117] "RemoveContainer" containerID="80f0fca586e5985910e0240029b9c78bc4f22c078d88c90a5cccf90ac16bbfd8"
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.241918 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pdwcs"]
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.246827 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pdwcs"]
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.259986 5049 scope.go:117] "RemoveContainer" containerID="47d90ebc1e4ce724127fb9a7c0d8ece4d984b0f7421999027ae64f0085e55f15"
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.282347 5049 scope.go:117] "RemoveContainer" containerID="40ef394ecbc8b00620addf9c13181b4114a1b14ddc2ae742502c5fdcc20ac986"
Jan 27 17:13:22 crc kubenswrapper[5049]: E0127 17:13:22.282964 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40ef394ecbc8b00620addf9c13181b4114a1b14ddc2ae742502c5fdcc20ac986\": container with ID starting with 40ef394ecbc8b00620addf9c13181b4114a1b14ddc2ae742502c5fdcc20ac986 not found: ID does not exist" containerID="40ef394ecbc8b00620addf9c13181b4114a1b14ddc2ae742502c5fdcc20ac986"
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.282999 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40ef394ecbc8b00620addf9c13181b4114a1b14ddc2ae742502c5fdcc20ac986"} err="failed to get container status \"40ef394ecbc8b00620addf9c13181b4114a1b14ddc2ae742502c5fdcc20ac986\": rpc error: code = NotFound desc = could not find container \"40ef394ecbc8b00620addf9c13181b4114a1b14ddc2ae742502c5fdcc20ac986\": container with ID starting with 40ef394ecbc8b00620addf9c13181b4114a1b14ddc2ae742502c5fdcc20ac986 not found: ID does not exist"
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.283025 5049 scope.go:117] "RemoveContainer" containerID="80f0fca586e5985910e0240029b9c78bc4f22c078d88c90a5cccf90ac16bbfd8"
Jan 27 17:13:22 crc kubenswrapper[5049]: E0127 17:13:22.283361 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f0fca586e5985910e0240029b9c78bc4f22c078d88c90a5cccf90ac16bbfd8\": container with ID starting with 80f0fca586e5985910e0240029b9c78bc4f22c078d88c90a5cccf90ac16bbfd8 not found: ID does not exist" containerID="80f0fca586e5985910e0240029b9c78bc4f22c078d88c90a5cccf90ac16bbfd8"
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.283393 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f0fca586e5985910e0240029b9c78bc4f22c078d88c90a5cccf90ac16bbfd8"} err="failed to get container status \"80f0fca586e5985910e0240029b9c78bc4f22c078d88c90a5cccf90ac16bbfd8\": rpc error: code = NotFound desc = could not find container \"80f0fca586e5985910e0240029b9c78bc4f22c078d88c90a5cccf90ac16bbfd8\": container with ID starting with 80f0fca586e5985910e0240029b9c78bc4f22c078d88c90a5cccf90ac16bbfd8 not found: ID does not exist"
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.283415 5049 scope.go:117] "RemoveContainer" containerID="47d90ebc1e4ce724127fb9a7c0d8ece4d984b0f7421999027ae64f0085e55f15"
Jan 27 17:13:22 crc kubenswrapper[5049]: E0127 17:13:22.283767 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47d90ebc1e4ce724127fb9a7c0d8ece4d984b0f7421999027ae64f0085e55f15\": container with ID starting with 47d90ebc1e4ce724127fb9a7c0d8ece4d984b0f7421999027ae64f0085e55f15 not found: ID does not exist" containerID="47d90ebc1e4ce724127fb9a7c0d8ece4d984b0f7421999027ae64f0085e55f15"
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.283795 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d90ebc1e4ce724127fb9a7c0d8ece4d984b0f7421999027ae64f0085e55f15"} err="failed to get container status \"47d90ebc1e4ce724127fb9a7c0d8ece4d984b0f7421999027ae64f0085e55f15\": rpc error: code = NotFound desc = could not find container \"47d90ebc1e4ce724127fb9a7c0d8ece4d984b0f7421999027ae64f0085e55f15\": container with ID starting with 47d90ebc1e4ce724127fb9a7c0d8ece4d984b0f7421999027ae64f0085e55f15 not found: ID does not exist"
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.568694 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.568987 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:22 crc kubenswrapper[5049]: I0127 17:13:22.620817 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:23 crc kubenswrapper[5049]: I0127 17:13:23.247102 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k2wqw"
Jan 27 17:13:23 crc kubenswrapper[5049]: I0127 17:13:23.658705 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea0ebf5-d5b2-4215-923e-df9a49b83828" path="/var/lib/kubelet/pods/cea0ebf5-d5b2-4215-923e-df9a49b83828/volumes"
Jan 27 17:13:24 crc kubenswrapper[5049]: I0127 17:13:24.612133 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2wqw"]
Jan 27 17:13:26 crc kubenswrapper[5049]: I0127 17:13:26.229020 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xvfdb"
Jan 27 17:13:26 crc kubenswrapper[5049]: I0127 17:13:26.235131 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k2wqw" podUID="d81f4771-673d-4440-8fd6-c964024aa07b" containerName="registry-server" containerID="cri-o://7f21eac1fd59385d50a3b5120d428fc1491c119aa743a460cd27981ed6f97599" gracePeriod=2
Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.243485 5049 generic.go:334] "Generic (PLEG): container finished" podID="d81f4771-673d-4440-8fd6-c964024aa07b" containerID="7f21eac1fd59385d50a3b5120d428fc1491c119aa743a460cd27981ed6f97599" exitCode=0
Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.243562 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2wqw" event={"ID":"d81f4771-673d-4440-8fd6-c964024aa07b","Type":"ContainerDied","Data":"7f21eac1fd59385d50a3b5120d428fc1491c119aa743a460cd27981ed6f97599"}
Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.687517 5049 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2wqw" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.740952 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81f4771-673d-4440-8fd6-c964024aa07b-utilities\") pod \"d81f4771-673d-4440-8fd6-c964024aa07b\" (UID: \"d81f4771-673d-4440-8fd6-c964024aa07b\") " Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.741037 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfscq\" (UniqueName: \"kubernetes.io/projected/d81f4771-673d-4440-8fd6-c964024aa07b-kube-api-access-pfscq\") pod \"d81f4771-673d-4440-8fd6-c964024aa07b\" (UID: \"d81f4771-673d-4440-8fd6-c964024aa07b\") " Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.741079 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81f4771-673d-4440-8fd6-c964024aa07b-catalog-content\") pod \"d81f4771-673d-4440-8fd6-c964024aa07b\" (UID: \"d81f4771-673d-4440-8fd6-c964024aa07b\") " Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.742337 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81f4771-673d-4440-8fd6-c964024aa07b-utilities" (OuterVolumeSpecName: "utilities") pod "d81f4771-673d-4440-8fd6-c964024aa07b" (UID: "d81f4771-673d-4440-8fd6-c964024aa07b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.746140 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81f4771-673d-4440-8fd6-c964024aa07b-kube-api-access-pfscq" (OuterVolumeSpecName: "kube-api-access-pfscq") pod "d81f4771-673d-4440-8fd6-c964024aa07b" (UID: "d81f4771-673d-4440-8fd6-c964024aa07b"). InnerVolumeSpecName "kube-api-access-pfscq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.775472 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81f4771-673d-4440-8fd6-c964024aa07b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d81f4771-673d-4440-8fd6-c964024aa07b" (UID: "d81f4771-673d-4440-8fd6-c964024aa07b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.784095 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-g4n6q"] Jan 27 17:13:27 crc kubenswrapper[5049]: E0127 17:13:27.786169 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81f4771-673d-4440-8fd6-c964024aa07b" containerName="registry-server" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.786239 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81f4771-673d-4440-8fd6-c964024aa07b" containerName="registry-server" Jan 27 17:13:27 crc kubenswrapper[5049]: E0127 17:13:27.786257 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea0ebf5-d5b2-4215-923e-df9a49b83828" containerName="extract-utilities" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.786265 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea0ebf5-d5b2-4215-923e-df9a49b83828" containerName="extract-utilities" Jan 27 17:13:27 crc kubenswrapper[5049]: E0127 17:13:27.786272 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81f4771-673d-4440-8fd6-c964024aa07b" containerName="extract-utilities" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.786280 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81f4771-673d-4440-8fd6-c964024aa07b" containerName="extract-utilities" Jan 27 17:13:27 crc kubenswrapper[5049]: E0127 17:13:27.786293 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81f4771-673d-4440-8fd6-c964024aa07b" containerName="extract-content" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.786300 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81f4771-673d-4440-8fd6-c964024aa07b" containerName="extract-content" Jan 27 17:13:27 crc kubenswrapper[5049]: E0127 17:13:27.786314 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea0ebf5-d5b2-4215-923e-df9a49b83828" containerName="registry-server" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.786320 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea0ebf5-d5b2-4215-923e-df9a49b83828" containerName="registry-server" Jan 27 17:13:27 crc kubenswrapper[5049]: E0127 17:13:27.786331 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea0ebf5-d5b2-4215-923e-df9a49b83828" containerName="extract-content" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.786337 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea0ebf5-d5b2-4215-923e-df9a49b83828" containerName="extract-content" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.786492 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea0ebf5-d5b2-4215-923e-df9a49b83828" containerName="registry-server" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.786510 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81f4771-673d-4440-8fd6-c964024aa07b" containerName="registry-server" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.787240 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-g4n6q" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.792899 5049 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-q2mf2" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.808744 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-g4n6q"] Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.842295 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cljlg\" (UniqueName: \"kubernetes.io/projected/6d85b59c-bed6-4dff-8ae5-cca3c210210e-kube-api-access-cljlg\") pod \"cert-manager-86cb77c54b-g4n6q\" (UID: \"6d85b59c-bed6-4dff-8ae5-cca3c210210e\") " pod="cert-manager/cert-manager-86cb77c54b-g4n6q" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.842400 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d85b59c-bed6-4dff-8ae5-cca3c210210e-bound-sa-token\") pod \"cert-manager-86cb77c54b-g4n6q\" (UID: \"6d85b59c-bed6-4dff-8ae5-cca3c210210e\") " pod="cert-manager/cert-manager-86cb77c54b-g4n6q" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.842461 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81f4771-673d-4440-8fd6-c964024aa07b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.842476 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81f4771-673d-4440-8fd6-c964024aa07b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.842489 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfscq\" (UniqueName: \"kubernetes.io/projected/d81f4771-673d-4440-8fd6-c964024aa07b-kube-api-access-pfscq\") on node \"crc\" DevicePath \"\"" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.943476 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cljlg\" (UniqueName: \"kubernetes.io/projected/6d85b59c-bed6-4dff-8ae5-cca3c210210e-kube-api-access-cljlg\") pod \"cert-manager-86cb77c54b-g4n6q\" (UID: \"6d85b59c-bed6-4dff-8ae5-cca3c210210e\") " pod="cert-manager/cert-manager-86cb77c54b-g4n6q" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.943560 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d85b59c-bed6-4dff-8ae5-cca3c210210e-bound-sa-token\") pod \"cert-manager-86cb77c54b-g4n6q\" (UID: \"6d85b59c-bed6-4dff-8ae5-cca3c210210e\") " pod="cert-manager/cert-manager-86cb77c54b-g4n6q" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.965759 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d85b59c-bed6-4dff-8ae5-cca3c210210e-bound-sa-token\") pod \"cert-manager-86cb77c54b-g4n6q\" (UID: \"6d85b59c-bed6-4dff-8ae5-cca3c210210e\") " pod="cert-manager/cert-manager-86cb77c54b-g4n6q" Jan 27 17:13:27 crc kubenswrapper[5049]: I0127 17:13:27.979507 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cljlg\" (UniqueName: \"kubernetes.io/projected/6d85b59c-bed6-4dff-8ae5-cca3c210210e-kube-api-access-cljlg\") pod 
\"cert-manager-86cb77c54b-g4n6q\" (UID: \"6d85b59c-bed6-4dff-8ae5-cca3c210210e\") " pod="cert-manager/cert-manager-86cb77c54b-g4n6q" Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.106775 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-g4n6q" Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.283530 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2wqw" event={"ID":"d81f4771-673d-4440-8fd6-c964024aa07b","Type":"ContainerDied","Data":"dc8e4e5e7d2bfbf354ec31d05955cf76799490b811a1fdbc51a5d4627ccf5f5e"} Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.283806 5049 scope.go:117] "RemoveContainer" containerID="7f21eac1fd59385d50a3b5120d428fc1491c119aa743a460cd27981ed6f97599" Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.283924 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2wqw" Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.287548 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wcq9n" event={"ID":"4091c9ea-7f48-4824-867f-378f7a2a8c04","Type":"ContainerStarted","Data":"05f1eda46dbbc8b8b47b565d636fbdc194166c2a4eda3101cc0eb1cf07c23f70"} Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.290112 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-w4pq6" event={"ID":"d57413ac-f34c-430e-9c96-18f0da415614","Type":"ContainerStarted","Data":"d624fc5b8af840f97a49655af943d7c532d0bfab3a115d6d6d5ca4be9cec4de4"} Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.290442 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-w4pq6" Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.304284 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wcq9n" podStartSLOduration=1.9028726150000002 podStartE2EDuration="8.304265376s" podCreationTimestamp="2026-01-27 17:13:20 +0000 UTC" firstStartedPulling="2026-01-27 17:13:21.112874526 +0000 UTC m=+976.211848075" lastFinishedPulling="2026-01-27 17:13:27.514267267 +0000 UTC m=+982.613240836" observedRunningTime="2026-01-27 17:13:28.302742812 +0000 UTC m=+983.401716381" watchObservedRunningTime="2026-01-27 17:13:28.304265376 +0000 UTC m=+983.403238925" Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.308894 5049 scope.go:117] "RemoveContainer" containerID="5eead88f8010bc20c5b93aff572df0792fbc628da6615966bd2f6e57e048117b" Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.331722 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-w4pq6" podStartSLOduration=1.717692235 podStartE2EDuration="12.331698806s" podCreationTimestamp="2026-01-27 17:13:16 +0000 UTC" firstStartedPulling="2026-01-27 17:13:16.938995353 +0000 UTC m=+972.037968902" lastFinishedPulling="2026-01-27 17:13:27.553001924 +0000 UTC m=+982.651975473" observedRunningTime="2026-01-27 17:13:28.323123579 +0000 UTC m=+983.422097158" watchObservedRunningTime="2026-01-27 17:13:28.331698806 +0000 UTC m=+983.430672365" Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.342844 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2wqw"] Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.344978 
5049 scope.go:117] "RemoveContainer" containerID="94696b324c8e06b52a0fe3b7af3963dd6bfa8d6165fe13b189b5c125bca64afd" Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.351231 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2wqw"] Jan 27 17:13:28 crc kubenswrapper[5049]: E0127 17:13:28.377142 5049 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd81f4771_673d_4440_8fd6_c964024aa07b.slice\": RecentStats: unable to find data in memory cache]" Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.558553 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-g4n6q"] Jan 27 17:13:28 crc kubenswrapper[5049]: W0127 17:13:28.563043 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d85b59c_bed6_4dff_8ae5_cca3c210210e.slice/crio-e269f82d88cd8c14187aff11efb9f83ae9f606dfe1f25d50aa621a5484c63537 WatchSource:0}: Error finding container e269f82d88cd8c14187aff11efb9f83ae9f606dfe1f25d50aa621a5484c63537: Status 404 returned error can't find the container with id e269f82d88cd8c14187aff11efb9f83ae9f606dfe1f25d50aa621a5484c63537 Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.609275 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvfdb"] Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.609494 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xvfdb" podUID="4af949e8-9546-43ca-ae17-82238e2169f2" containerName="registry-server" containerID="cri-o://8fd1a4e5dac957d44e2ab3b0db8c4b85fa429e013b36dda52923b85bd4462dee" gracePeriod=2 Jan 27 17:13:28 crc kubenswrapper[5049]: I0127 17:13:28.967515 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xvfdb" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.056266 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af949e8-9546-43ca-ae17-82238e2169f2-catalog-content\") pod \"4af949e8-9546-43ca-ae17-82238e2169f2\" (UID: \"4af949e8-9546-43ca-ae17-82238e2169f2\") " Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.056327 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qscf\" (UniqueName: \"kubernetes.io/projected/4af949e8-9546-43ca-ae17-82238e2169f2-kube-api-access-5qscf\") pod \"4af949e8-9546-43ca-ae17-82238e2169f2\" (UID: \"4af949e8-9546-43ca-ae17-82238e2169f2\") " Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.056396 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af949e8-9546-43ca-ae17-82238e2169f2-utilities\") pod \"4af949e8-9546-43ca-ae17-82238e2169f2\" (UID: \"4af949e8-9546-43ca-ae17-82238e2169f2\") " Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.057351 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4af949e8-9546-43ca-ae17-82238e2169f2-utilities" (OuterVolumeSpecName: "utilities") pod "4af949e8-9546-43ca-ae17-82238e2169f2" (UID: "4af949e8-9546-43ca-ae17-82238e2169f2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.062817 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af949e8-9546-43ca-ae17-82238e2169f2-kube-api-access-5qscf" (OuterVolumeSpecName: "kube-api-access-5qscf") pod "4af949e8-9546-43ca-ae17-82238e2169f2" (UID: "4af949e8-9546-43ca-ae17-82238e2169f2"). InnerVolumeSpecName "kube-api-access-5qscf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.104883 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4af949e8-9546-43ca-ae17-82238e2169f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4af949e8-9546-43ca-ae17-82238e2169f2" (UID: "4af949e8-9546-43ca-ae17-82238e2169f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.157563 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4af949e8-9546-43ca-ae17-82238e2169f2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.157595 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4af949e8-9546-43ca-ae17-82238e2169f2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.157605 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qscf\" (UniqueName: \"kubernetes.io/projected/4af949e8-9546-43ca-ae17-82238e2169f2-kube-api-access-5qscf\") on node \"crc\" DevicePath \"\"" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.297927 5049 generic.go:334] "Generic (PLEG): container finished" podID="4af949e8-9546-43ca-ae17-82238e2169f2" containerID="8fd1a4e5dac957d44e2ab3b0db8c4b85fa429e013b36dda52923b85bd4462dee" exitCode=0 Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.297973 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvfdb" event={"ID":"4af949e8-9546-43ca-ae17-82238e2169f2","Type":"ContainerDied","Data":"8fd1a4e5dac957d44e2ab3b0db8c4b85fa429e013b36dda52923b85bd4462dee"} Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.298394 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xvfdb" event={"ID":"4af949e8-9546-43ca-ae17-82238e2169f2","Type":"ContainerDied","Data":"da1826f656fbfd26bdec5e1640b1881cfb05a818002e1f5941c370c9e878a4bb"} Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.298430 5049 scope.go:117] "RemoveContainer" containerID="8fd1a4e5dac957d44e2ab3b0db8c4b85fa429e013b36dda52923b85bd4462dee" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.298811 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xvfdb" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.301731 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-g4n6q" event={"ID":"6d85b59c-bed6-4dff-8ae5-cca3c210210e","Type":"ContainerStarted","Data":"057bdc8b5efd926d7c30d99a9e840ba386d8ceb0c23e36e0170e69425c277f68"} Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.301780 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-g4n6q" event={"ID":"6d85b59c-bed6-4dff-8ae5-cca3c210210e","Type":"ContainerStarted","Data":"e269f82d88cd8c14187aff11efb9f83ae9f606dfe1f25d50aa621a5484c63537"} Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.320435 5049 scope.go:117] "RemoveContainer" containerID="5031dd51cef46e4c8a877cb4d4419efa59281810797fc1e3ab6d30b2908770d4" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.335635 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-g4n6q" podStartSLOduration=2.335614689 podStartE2EDuration="2.335614689s" podCreationTimestamp="2026-01-27 17:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:13:29.331602603 +0000 UTC m=+984.430576152" watchObservedRunningTime="2026-01-27 17:13:29.335614689 +0000 UTC m=+984.434588238" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.352057 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xvfdb"] Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.367805 5049 scope.go:117] "RemoveContainer" containerID="cbf433378b7aeb35d449fc7c180eb0aa827c1ec85b7437d5c2eb5e51d0311ae5" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.369306 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xvfdb"] Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.388469 5049 scope.go:117] "RemoveContainer" containerID="8fd1a4e5dac957d44e2ab3b0db8c4b85fa429e013b36dda52923b85bd4462dee" Jan 27 17:13:29 crc kubenswrapper[5049]: E0127 17:13:29.388853 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fd1a4e5dac957d44e2ab3b0db8c4b85fa429e013b36dda52923b85bd4462dee\": container with ID starting with 8fd1a4e5dac957d44e2ab3b0db8c4b85fa429e013b36dda52923b85bd4462dee not found: ID does not exist" containerID="8fd1a4e5dac957d44e2ab3b0db8c4b85fa429e013b36dda52923b85bd4462dee" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.388892 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fd1a4e5dac957d44e2ab3b0db8c4b85fa429e013b36dda52923b85bd4462dee"} err="failed to get container status \"8fd1a4e5dac957d44e2ab3b0db8c4b85fa429e013b36dda52923b85bd4462dee\": rpc error: code = NotFound desc = could not find container \"8fd1a4e5dac957d44e2ab3b0db8c4b85fa429e013b36dda52923b85bd4462dee\": container with ID starting with 8fd1a4e5dac957d44e2ab3b0db8c4b85fa429e013b36dda52923b85bd4462dee not found: ID does not exist" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.388919 5049 scope.go:117] "RemoveContainer" containerID="5031dd51cef46e4c8a877cb4d4419efa59281810797fc1e3ab6d30b2908770d4" Jan 27 17:13:29 crc kubenswrapper[5049]: E0127 17:13:29.392786 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"5031dd51cef46e4c8a877cb4d4419efa59281810797fc1e3ab6d30b2908770d4\": container with ID starting with 5031dd51cef46e4c8a877cb4d4419efa59281810797fc1e3ab6d30b2908770d4 not found: ID does not exist" containerID="5031dd51cef46e4c8a877cb4d4419efa59281810797fc1e3ab6d30b2908770d4" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.392829 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5031dd51cef46e4c8a877cb4d4419efa59281810797fc1e3ab6d30b2908770d4"} err="failed to get container status \"5031dd51cef46e4c8a877cb4d4419efa59281810797fc1e3ab6d30b2908770d4\": rpc error: code = NotFound desc = could not find container \"5031dd51cef46e4c8a877cb4d4419efa59281810797fc1e3ab6d30b2908770d4\": container with ID starting with 5031dd51cef46e4c8a877cb4d4419efa59281810797fc1e3ab6d30b2908770d4 not found: ID does not exist" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.392855 5049 scope.go:117] "RemoveContainer" containerID="cbf433378b7aeb35d449fc7c180eb0aa827c1ec85b7437d5c2eb5e51d0311ae5" Jan 27 17:13:29 crc kubenswrapper[5049]: E0127 17:13:29.393119 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbf433378b7aeb35d449fc7c180eb0aa827c1ec85b7437d5c2eb5e51d0311ae5\": container with ID starting with cbf433378b7aeb35d449fc7c180eb0aa827c1ec85b7437d5c2eb5e51d0311ae5 not found: ID does not exist" containerID="cbf433378b7aeb35d449fc7c180eb0aa827c1ec85b7437d5c2eb5e51d0311ae5" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.393149 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbf433378b7aeb35d449fc7c180eb0aa827c1ec85b7437d5c2eb5e51d0311ae5"} err="failed to get container status \"cbf433378b7aeb35d449fc7c180eb0aa827c1ec85b7437d5c2eb5e51d0311ae5\": rpc error: code = NotFound desc = could not find container \"cbf433378b7aeb35d449fc7c180eb0aa827c1ec85b7437d5c2eb5e51d0311ae5\": container with ID starting with cbf433378b7aeb35d449fc7c180eb0aa827c1ec85b7437d5c2eb5e51d0311ae5 not found: ID does not exist" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.652743 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4af949e8-9546-43ca-ae17-82238e2169f2" path="/var/lib/kubelet/pods/4af949e8-9546-43ca-ae17-82238e2169f2/volumes" Jan 27 17:13:29 crc kubenswrapper[5049]: I0127 17:13:29.653781 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d81f4771-673d-4440-8fd6-c964024aa07b" path="/var/lib/kubelet/pods/d81f4771-673d-4440-8fd6-c964024aa07b/volumes" Jan 27 17:13:36 crc kubenswrapper[5049]: I0127 17:13:36.403092 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-w4pq6" Jan 27 17:13:39 crc kubenswrapper[5049]: I0127 17:13:39.896062 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rwm66"] Jan 27 17:13:39 crc kubenswrapper[5049]: E0127 17:13:39.896784 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4af949e8-9546-43ca-ae17-82238e2169f2" containerName="extract-utilities" Jan 27 17:13:39 crc kubenswrapper[5049]: I0127 17:13:39.896811 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af949e8-9546-43ca-ae17-82238e2169f2" containerName="extract-utilities" Jan 27 17:13:39 crc kubenswrapper[5049]: E0127 17:13:39.896844 5049 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4af949e8-9546-43ca-ae17-82238e2169f2" containerName="registry-server" Jan 27 17:13:39 crc kubenswrapper[5049]: I0127 17:13:39.896855 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af949e8-9546-43ca-ae17-82238e2169f2" containerName="registry-server" Jan 27 17:13:39 crc kubenswrapper[5049]: E0127 17:13:39.896880 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4af949e8-9546-43ca-ae17-82238e2169f2" containerName="extract-content" Jan 27 17:13:39 crc kubenswrapper[5049]: I0127 17:13:39.896897 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af949e8-9546-43ca-ae17-82238e2169f2" containerName="extract-content" Jan 27 17:13:39 crc kubenswrapper[5049]: I0127 17:13:39.897065 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="4af949e8-9546-43ca-ae17-82238e2169f2" containerName="registry-server" Jan 27 17:13:39 crc kubenswrapper[5049]: I0127 17:13:39.899203 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rwm66" Jan 27 17:13:39 crc kubenswrapper[5049]: I0127 17:13:39.902712 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-c5mtc" Jan 27 17:13:39 crc kubenswrapper[5049]: I0127 17:13:39.902964 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 17:13:39 crc kubenswrapper[5049]: I0127 17:13:39.903137 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 17:13:39 crc kubenswrapper[5049]: I0127 17:13:39.918555 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rwm66"] Jan 27 17:13:40 crc kubenswrapper[5049]: I0127 17:13:40.000141 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rb42\" (UniqueName: \"kubernetes.io/projected/ada4a85b-e05c-4165-abb7-d6e9dee7bfef-kube-api-access-7rb42\") pod \"openstack-operator-index-rwm66\" (UID: \"ada4a85b-e05c-4165-abb7-d6e9dee7bfef\") " pod="openstack-operators/openstack-operator-index-rwm66" Jan 27 17:13:40 crc kubenswrapper[5049]: I0127 17:13:40.101940 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rb42\" (UniqueName: \"kubernetes.io/projected/ada4a85b-e05c-4165-abb7-d6e9dee7bfef-kube-api-access-7rb42\") pod \"openstack-operator-index-rwm66\" (UID: \"ada4a85b-e05c-4165-abb7-d6e9dee7bfef\") " pod="openstack-operators/openstack-operator-index-rwm66" Jan 27 17:13:40 crc kubenswrapper[5049]: I0127 17:13:40.121451 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rb42\" (UniqueName: \"kubernetes.io/projected/ada4a85b-e05c-4165-abb7-d6e9dee7bfef-kube-api-access-7rb42\") pod \"openstack-operator-index-rwm66\" (UID: \"ada4a85b-e05c-4165-abb7-d6e9dee7bfef\") " pod="openstack-operators/openstack-operator-index-rwm66" Jan 27 17:13:40 crc kubenswrapper[5049]: I0127 17:13:40.220959 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rwm66" Jan 27 17:13:40 crc kubenswrapper[5049]: I0127 17:13:40.669337 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rwm66"] Jan 27 17:13:40 crc kubenswrapper[5049]: W0127 17:13:40.679916 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podada4a85b_e05c_4165_abb7_d6e9dee7bfef.slice/crio-56a65d673aa0a6a2d2577f0449accbbb60f25a4dc5475db0034359fd59d3b9eb WatchSource:0}: Error finding container 56a65d673aa0a6a2d2577f0449accbbb60f25a4dc5475db0034359fd59d3b9eb: Status 404 returned error can't find the container with id 56a65d673aa0a6a2d2577f0449accbbb60f25a4dc5475db0034359fd59d3b9eb Jan 27 17:13:41 crc kubenswrapper[5049]: I0127 17:13:41.384731 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rwm66" event={"ID":"ada4a85b-e05c-4165-abb7-d6e9dee7bfef","Type":"ContainerStarted","Data":"56a65d673aa0a6a2d2577f0449accbbb60f25a4dc5475db0034359fd59d3b9eb"} Jan 27 17:13:43 crc kubenswrapper[5049]: I0127 17:13:43.074171 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rwm66"] Jan 27 17:13:43 crc kubenswrapper[5049]: I0127 17:13:43.398447 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rwm66" event={"ID":"ada4a85b-e05c-4165-abb7-d6e9dee7bfef","Type":"ContainerStarted","Data":"375ceb3a1399ae43573c89c7d67e2945948c4b65669514d5817ddff0dc9ddf7e"} Jan 27 17:13:43 crc kubenswrapper[5049]: I0127 17:13:43.398546 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-rwm66" podUID="ada4a85b-e05c-4165-abb7-d6e9dee7bfef" containerName="registry-server" containerID="cri-o://375ceb3a1399ae43573c89c7d67e2945948c4b65669514d5817ddff0dc9ddf7e" gracePeriod=2 Jan 27 17:13:43 crc kubenswrapper[5049]: I0127 17:13:43.416989 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rwm66" podStartSLOduration=2.121305599 podStartE2EDuration="4.41695834s" podCreationTimestamp="2026-01-27 17:13:39 +0000 UTC" firstStartedPulling="2026-01-27 17:13:40.681267837 +0000 UTC m=+995.780241386" lastFinishedPulling="2026-01-27 17:13:42.976920568 +0000 UTC m=+998.075894127" observedRunningTime="2026-01-27 17:13:43.416625121 +0000 UTC m=+998.515598670" watchObservedRunningTime="2026-01-27 17:13:43.41695834 +0000 UTC m=+998.515931929" Jan 27 17:13:43 crc kubenswrapper[5049]: I0127 17:13:43.701632 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fxk5l"] Jan 27 17:13:43 crc kubenswrapper[5049]: I0127 17:13:43.702648 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fxk5l" Jan 27 17:13:43 crc kubenswrapper[5049]: I0127 17:13:43.711609 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fxk5l"] Jan 27 17:13:43 crc kubenswrapper[5049]: I0127 17:13:43.864608 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rwm66" Jan 27 17:13:43 crc kubenswrapper[5049]: I0127 17:13:43.892440 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dvh5\" (UniqueName: \"kubernetes.io/projected/385c67fc-30f8-409c-b59e-d5d7182730c8-kube-api-access-7dvh5\") pod \"openstack-operator-index-fxk5l\" (UID: \"385c67fc-30f8-409c-b59e-d5d7182730c8\") " pod="openstack-operators/openstack-operator-index-fxk5l" Jan 27 17:13:43 crc kubenswrapper[5049]: I0127 17:13:43.993900 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rb42\" (UniqueName: \"kubernetes.io/projected/ada4a85b-e05c-4165-abb7-d6e9dee7bfef-kube-api-access-7rb42\") pod \"ada4a85b-e05c-4165-abb7-d6e9dee7bfef\" (UID: \"ada4a85b-e05c-4165-abb7-d6e9dee7bfef\") " Jan 27 17:13:43 crc kubenswrapper[5049]: I0127 17:13:43.994241 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dvh5\" (UniqueName: \"kubernetes.io/projected/385c67fc-30f8-409c-b59e-d5d7182730c8-kube-api-access-7dvh5\") pod \"openstack-operator-index-fxk5l\" (UID: \"385c67fc-30f8-409c-b59e-d5d7182730c8\") " pod="openstack-operators/openstack-operator-index-fxk5l" Jan 27 17:13:44 crc kubenswrapper[5049]: I0127 17:13:44.003386 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ada4a85b-e05c-4165-abb7-d6e9dee7bfef-kube-api-access-7rb42" (OuterVolumeSpecName: "kube-api-access-7rb42") pod "ada4a85b-e05c-4165-abb7-d6e9dee7bfef" (UID: "ada4a85b-e05c-4165-abb7-d6e9dee7bfef"). InnerVolumeSpecName "kube-api-access-7rb42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:13:44 crc kubenswrapper[5049]: I0127 17:13:44.011859 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dvh5\" (UniqueName: \"kubernetes.io/projected/385c67fc-30f8-409c-b59e-d5d7182730c8-kube-api-access-7dvh5\") pod \"openstack-operator-index-fxk5l\" (UID: \"385c67fc-30f8-409c-b59e-d5d7182730c8\") " pod="openstack-operators/openstack-operator-index-fxk5l" Jan 27 17:13:44 crc kubenswrapper[5049]: I0127 17:13:44.093789 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fxk5l" Jan 27 17:13:44 crc kubenswrapper[5049]: I0127 17:13:44.095189 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rb42\" (UniqueName: \"kubernetes.io/projected/ada4a85b-e05c-4165-abb7-d6e9dee7bfef-kube-api-access-7rb42\") on node \"crc\" DevicePath \"\"" Jan 27 17:13:44 crc kubenswrapper[5049]: I0127 17:13:44.404910 5049 generic.go:334] "Generic (PLEG): container finished" podID="ada4a85b-e05c-4165-abb7-d6e9dee7bfef" containerID="375ceb3a1399ae43573c89c7d67e2945948c4b65669514d5817ddff0dc9ddf7e" exitCode=0 Jan 27 17:13:44 crc kubenswrapper[5049]: I0127 17:13:44.404991 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rwm66" Jan 27 17:13:44 crc kubenswrapper[5049]: I0127 17:13:44.405003 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rwm66" event={"ID":"ada4a85b-e05c-4165-abb7-d6e9dee7bfef","Type":"ContainerDied","Data":"375ceb3a1399ae43573c89c7d67e2945948c4b65669514d5817ddff0dc9ddf7e"} Jan 27 17:13:44 crc kubenswrapper[5049]: I0127 17:13:44.405303 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rwm66" event={"ID":"ada4a85b-e05c-4165-abb7-d6e9dee7bfef","Type":"ContainerDied","Data":"56a65d673aa0a6a2d2577f0449accbbb60f25a4dc5475db0034359fd59d3b9eb"} Jan 27 17:13:44 crc kubenswrapper[5049]: I0127 17:13:44.405328 5049 scope.go:117] "RemoveContainer" containerID="375ceb3a1399ae43573c89c7d67e2945948c4b65669514d5817ddff0dc9ddf7e" Jan 27 17:13:44 crc kubenswrapper[5049]: I0127 17:13:44.433946 5049 scope.go:117] "RemoveContainer" containerID="375ceb3a1399ae43573c89c7d67e2945948c4b65669514d5817ddff0dc9ddf7e" Jan 27 17:13:44 crc kubenswrapper[5049]: I0127 17:13:44.434451 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-rwm66"] Jan 27 17:13:44 crc kubenswrapper[5049]: E0127 17:13:44.435368 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"375ceb3a1399ae43573c89c7d67e2945948c4b65669514d5817ddff0dc9ddf7e\": container with ID starting with 375ceb3a1399ae43573c89c7d67e2945948c4b65669514d5817ddff0dc9ddf7e not found: ID does not exist" containerID="375ceb3a1399ae43573c89c7d67e2945948c4b65669514d5817ddff0dc9ddf7e" Jan 27 17:13:44 crc kubenswrapper[5049]: I0127 17:13:44.435416 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375ceb3a1399ae43573c89c7d67e2945948c4b65669514d5817ddff0dc9ddf7e"} err="failed to get container status \"375ceb3a1399ae43573c89c7d67e2945948c4b65669514d5817ddff0dc9ddf7e\": rpc error: code = NotFound desc = could not find container \"375ceb3a1399ae43573c89c7d67e2945948c4b65669514d5817ddff0dc9ddf7e\": container with ID starting with 375ceb3a1399ae43573c89c7d67e2945948c4b65669514d5817ddff0dc9ddf7e not found: ID does not exist" Jan 27 17:13:44 crc kubenswrapper[5049]: I0127 17:13:44.441980 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-rwm66"] Jan 27 17:13:44 crc kubenswrapper[5049]: I0127 17:13:44.590829 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fxk5l"] Jan 27 17:13:44 crc kubenswrapper[5049]: W0127 17:13:44.598847 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod385c67fc_30f8_409c_b59e_d5d7182730c8.slice/crio-71fcf5906ef8efdc635697ea4c0c5bdb76d4aecb36769a509a8ba976c220e1c9 WatchSource:0}: Error finding container 71fcf5906ef8efdc635697ea4c0c5bdb76d4aecb36769a509a8ba976c220e1c9: Status 404 returned error can't find the container with id 71fcf5906ef8efdc635697ea4c0c5bdb76d4aecb36769a509a8ba976c220e1c9 Jan 27 17:13:45 crc kubenswrapper[5049]: I0127 17:13:45.415944 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fxk5l" event={"ID":"385c67fc-30f8-409c-b59e-d5d7182730c8","Type":"ContainerStarted","Data":"963e443b6fd112e979fc085fa2b974291fb1569c6a3eb5b223311da971fdf20c"} Jan 27 
17:13:45 crc kubenswrapper[5049]: I0127 17:13:45.417285 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fxk5l" event={"ID":"385c67fc-30f8-409c-b59e-d5d7182730c8","Type":"ContainerStarted","Data":"71fcf5906ef8efdc635697ea4c0c5bdb76d4aecb36769a509a8ba976c220e1c9"} Jan 27 17:13:45 crc kubenswrapper[5049]: I0127 17:13:45.436090 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fxk5l" podStartSLOduration=2.372172151 podStartE2EDuration="2.436072523s" podCreationTimestamp="2026-01-27 17:13:43 +0000 UTC" firstStartedPulling="2026-01-27 17:13:44.602158559 +0000 UTC m=+999.701132108" lastFinishedPulling="2026-01-27 17:13:44.666058931 +0000 UTC m=+999.765032480" observedRunningTime="2026-01-27 17:13:45.434291842 +0000 UTC m=+1000.533265401" watchObservedRunningTime="2026-01-27 17:13:45.436072523 +0000 UTC m=+1000.535046082" Jan 27 17:13:45 crc kubenswrapper[5049]: I0127 17:13:45.657244 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ada4a85b-e05c-4165-abb7-d6e9dee7bfef" path="/var/lib/kubelet/pods/ada4a85b-e05c-4165-abb7-d6e9dee7bfef/volumes" Jan 27 17:13:47 crc kubenswrapper[5049]: I0127 17:13:47.781783 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:13:47 crc kubenswrapper[5049]: I0127 17:13:47.782153 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:13:54 crc kubenswrapper[5049]: I0127 17:13:54.094867 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fxk5l" Jan 27 17:13:54 crc kubenswrapper[5049]: I0127 17:13:54.095337 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-fxk5l" Jan 27 17:13:54 crc kubenswrapper[5049]: I0127 17:13:54.140834 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-fxk5l" Jan 27 17:13:54 crc kubenswrapper[5049]: I0127 17:13:54.525935 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-fxk5l" Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.172821 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx"] Jan 27 17:14:02 crc kubenswrapper[5049]: E0127 17:14:02.173878 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ada4a85b-e05c-4165-abb7-d6e9dee7bfef" containerName="registry-server" Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.173901 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ada4a85b-e05c-4165-abb7-d6e9dee7bfef" containerName="registry-server" Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.174114 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="ada4a85b-e05c-4165-abb7-d6e9dee7bfef" containerName="registry-server" Jan 27 17:14:02 crc 
kubenswrapper[5049]: I0127 17:14:02.175723 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.179379 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tpzrx" Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.183544 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx"] Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.363020 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-util\") pod \"5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx\" (UID: \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\") " pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.363106 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdxb5\" (UniqueName: \"kubernetes.io/projected/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-kube-api-access-vdxb5\") pod \"5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx\" (UID: \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\") " pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.363143 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-bundle\") pod \"5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx\" (UID: \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\") " pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.465108 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdxb5\" (UniqueName: \"kubernetes.io/projected/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-kube-api-access-vdxb5\") pod \"5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx\" (UID: \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\") " pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.465184 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-bundle\") pod \"5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx\" (UID: \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\") " pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.465373 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-util\") pod \"5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx\" (UID: \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\") " pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.466257 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-bundle\") pod \"5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx\" (UID: \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\") " pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.466358 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-util\") pod \"5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx\" (UID: \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\") " pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.491542 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdxb5\" (UniqueName: \"kubernetes.io/projected/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-kube-api-access-vdxb5\") pod \"5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx\" (UID: \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\") " pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" Jan 27 17:14:02 crc kubenswrapper[5049]: I0127 17:14:02.508619 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" Jan 27 17:14:03 crc kubenswrapper[5049]: I0127 17:14:03.025791 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx"] Jan 27 17:14:03 crc kubenswrapper[5049]: I0127 17:14:03.551603 5049 generic.go:334] "Generic (PLEG): container finished" podID="78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9" containerID="bc6aa09a3c15a311372a32df6e6186ce618ae2375004ab5d52b5a40726ecd650" exitCode=0 Jan 27 17:14:03 crc kubenswrapper[5049]: I0127 17:14:03.551860 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" event={"ID":"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9","Type":"ContainerDied","Data":"bc6aa09a3c15a311372a32df6e6186ce618ae2375004ab5d52b5a40726ecd650"} Jan 27 17:14:03 crc kubenswrapper[5049]: I0127 17:14:03.551984 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" event={"ID":"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9","Type":"ContainerStarted","Data":"a847c98f1db62cd6f624b2c53f1c4f2f8f62e67f30c93a5e2c93aba58bd59c43"} Jan 27 17:14:04 crc kubenswrapper[5049]: I0127 17:14:04.561874 5049 generic.go:334] "Generic (PLEG): container finished" podID="78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9" containerID="b77da02d239ff0717942d5e16b07c895d98689f7a5245a95ee3f0d5e58a4e997" exitCode=0 Jan 27 17:14:04 crc kubenswrapper[5049]: I0127 17:14:04.561920 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" event={"ID":"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9","Type":"ContainerDied","Data":"b77da02d239ff0717942d5e16b07c895d98689f7a5245a95ee3f0d5e58a4e997"} Jan 27 17:14:05 crc kubenswrapper[5049]: I0127 17:14:05.570121 5049 generic.go:334] "Generic (PLEG): container finished" podID="78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9" containerID="3bda4fa288c4d926b5bcb83d988708134ed78b759ac3a27b29a04eb87ec3e8df" exitCode=0 Jan 27 17:14:05 crc kubenswrapper[5049]: 
I0127 17:14:05.570240 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" event={"ID":"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9","Type":"ContainerDied","Data":"3bda4fa288c4d926b5bcb83d988708134ed78b759ac3a27b29a04eb87ec3e8df"} Jan 27 17:14:06 crc kubenswrapper[5049]: I0127 17:14:06.930952 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" Jan 27 17:14:07 crc kubenswrapper[5049]: I0127 17:14:07.051895 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-bundle\") pod \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\" (UID: \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\") " Jan 27 17:14:07 crc kubenswrapper[5049]: I0127 17:14:07.051993 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdxb5\" (UniqueName: \"kubernetes.io/projected/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-kube-api-access-vdxb5\") pod \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\" (UID: \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\") " Jan 27 17:14:07 crc kubenswrapper[5049]: I0127 17:14:07.052110 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-util\") pod \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\" (UID: \"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9\") " Jan 27 17:14:07 crc kubenswrapper[5049]: I0127 17:14:07.052946 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-bundle" (OuterVolumeSpecName: "bundle") pod "78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9" (UID: "78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:14:07 crc kubenswrapper[5049]: I0127 17:14:07.058293 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-kube-api-access-vdxb5" (OuterVolumeSpecName: "kube-api-access-vdxb5") pod "78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9" (UID: "78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9"). InnerVolumeSpecName "kube-api-access-vdxb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:14:07 crc kubenswrapper[5049]: I0127 17:14:07.080435 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-util" (OuterVolumeSpecName: "util") pod "78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9" (UID: "78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:14:07 crc kubenswrapper[5049]: I0127 17:14:07.153969 5049 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-util\") on node \"crc\" DevicePath \"\"" Jan 27 17:14:07 crc kubenswrapper[5049]: I0127 17:14:07.153998 5049 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:14:07 crc kubenswrapper[5049]: I0127 17:14:07.154008 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdxb5\" (UniqueName: \"kubernetes.io/projected/78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9-kube-api-access-vdxb5\") on node \"crc\" DevicePath \"\"" Jan 27 17:14:07 crc kubenswrapper[5049]: I0127 17:14:07.591123 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" event={"ID":"78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9","Type":"ContainerDied","Data":"a847c98f1db62cd6f624b2c53f1c4f2f8f62e67f30c93a5e2c93aba58bd59c43"} Jan 27 17:14:07 crc kubenswrapper[5049]: I0127 17:14:07.591168 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a847c98f1db62cd6f624b2c53f1c4f2f8f62e67f30c93a5e2c93aba58bd59c43" Jan 27 17:14:07 crc kubenswrapper[5049]: I0127 17:14:07.591354 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx" Jan 27 17:14:10 crc kubenswrapper[5049]: I0127 17:14:10.365267 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f484b79bf-6tn6x"] Jan 27 17:14:10 crc kubenswrapper[5049]: E0127 17:14:10.365794 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9" containerName="pull" Jan 27 17:14:10 crc kubenswrapper[5049]: I0127 17:14:10.365811 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9" containerName="pull" Jan 27 17:14:10 crc kubenswrapper[5049]: E0127 17:14:10.365826 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9" containerName="util" Jan 27 17:14:10 crc kubenswrapper[5049]: I0127 17:14:10.365834 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9" containerName="util" Jan 27 17:14:10 crc kubenswrapper[5049]: E0127 17:14:10.365843 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9" containerName="extract" Jan 27 17:14:10 crc kubenswrapper[5049]: I0127 17:14:10.365852 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9" containerName="extract" Jan 27 17:14:10 crc kubenswrapper[5049]: I0127 17:14:10.365979 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9" containerName="extract" Jan 27 17:14:10 crc kubenswrapper[5049]: I0127 17:14:10.366494 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f484b79bf-6tn6x" Jan 27 17:14:10 crc kubenswrapper[5049]: I0127 17:14:10.372296 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-46qpp" Jan 27 17:14:10 crc kubenswrapper[5049]: I0127 17:14:10.422115 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f484b79bf-6tn6x"] Jan 27 17:14:10 crc kubenswrapper[5049]: I0127 17:14:10.497924 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp525\" (UniqueName: \"kubernetes.io/projected/e89211f0-4414-462f-b634-68ebf429f864-kube-api-access-jp525\") pod \"openstack-operator-controller-init-7f484b79bf-6tn6x\" (UID: \"e89211f0-4414-462f-b634-68ebf429f864\") " pod="openstack-operators/openstack-operator-controller-init-7f484b79bf-6tn6x" Jan 27 17:14:10 crc kubenswrapper[5049]: I0127 17:14:10.599524 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp525\" (UniqueName: \"kubernetes.io/projected/e89211f0-4414-462f-b634-68ebf429f864-kube-api-access-jp525\") pod \"openstack-operator-controller-init-7f484b79bf-6tn6x\" (UID: \"e89211f0-4414-462f-b634-68ebf429f864\") " pod="openstack-operators/openstack-operator-controller-init-7f484b79bf-6tn6x" Jan 27 17:14:10 crc kubenswrapper[5049]: I0127 17:14:10.624922 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp525\" (UniqueName: \"kubernetes.io/projected/e89211f0-4414-462f-b634-68ebf429f864-kube-api-access-jp525\") pod \"openstack-operator-controller-init-7f484b79bf-6tn6x\" (UID: \"e89211f0-4414-462f-b634-68ebf429f864\") " pod="openstack-operators/openstack-operator-controller-init-7f484b79bf-6tn6x" Jan 27 17:14:10 crc kubenswrapper[5049]: I0127 17:14:10.690268 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f484b79bf-6tn6x" Jan 27 17:14:11 crc kubenswrapper[5049]: I0127 17:14:11.131354 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f484b79bf-6tn6x"] Jan 27 17:14:11 crc kubenswrapper[5049]: I0127 17:14:11.627289 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f484b79bf-6tn6x" event={"ID":"e89211f0-4414-462f-b634-68ebf429f864","Type":"ContainerStarted","Data":"79e0bc67f79b3ef4ca97585149335d309e58b21da9f223a9fdb2246dc2589e7c"} Jan 27 17:14:15 crc kubenswrapper[5049]: I0127 17:14:15.674474 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f484b79bf-6tn6x" event={"ID":"e89211f0-4414-462f-b634-68ebf429f864","Type":"ContainerStarted","Data":"3d88bd144adc298adcc0f2d0586015b66244eb9af17e05ac30c843098e049c49"} Jan 27 17:14:15 crc kubenswrapper[5049]: I0127 17:14:15.675055 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7f484b79bf-6tn6x" Jan 27 17:14:15 crc kubenswrapper[5049]: I0127 17:14:15.731724 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7f484b79bf-6tn6x" podStartSLOduration=1.920914258 podStartE2EDuration="5.731707366s" podCreationTimestamp="2026-01-27 17:14:10 +0000 UTC" firstStartedPulling="2026-01-27 17:14:11.141491564 +0000 UTC m=+1026.240465113" lastFinishedPulling="2026-01-27 17:14:14.952284672 +0000 UTC m=+1030.051258221" observedRunningTime="2026-01-27 17:14:15.728301968 +0000 UTC m=+1030.827275517" watchObservedRunningTime="2026-01-27 17:14:15.731707366 +0000 UTC m=+1030.830680915" Jan 27 17:14:17 crc kubenswrapper[5049]: I0127 17:14:17.781614 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:14:17 crc kubenswrapper[5049]: I0127 17:14:17.782019 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:14:20 crc kubenswrapper[5049]: I0127 17:14:20.694403 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7f484b79bf-6tn6x" Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.896903 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-8d8dj"] Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.898513 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-8d8dj" Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.900785 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-6xdxk" Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.902963 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6fxd4"] Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.903916 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6fxd4" Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.905344 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qrc55" Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.915506 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6fxd4"] Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.922587 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-8d8dj"] Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.928661 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-pb687"] Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.929525 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-pb687" Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.932163 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jhx8j" Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.935943 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-6gzdm"] Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.936775 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-6gzdm" Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.938524 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-lm8t2" Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.944321 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-jbx77"] Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.945310 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-jbx77" Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.950092 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-pb687"] Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.950137 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-gqrn6" Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.954657 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-6gzdm"] Jan 27 17:14:38 crc kubenswrapper[5049]: I0127 17:14:38.970419 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-jbx77"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.005856 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54csw\" (UniqueName: \"kubernetes.io/projected/1046a5c5-7064-4f3d-8a27-4c70edefff18-kube-api-access-54csw\") pod \"cinder-operator-controller-manager-655bf9cfbb-6fxd4\" (UID: \"1046a5c5-7064-4f3d-8a27-4c70edefff18\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6fxd4" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.005956 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sxd8\" (UniqueName: \"kubernetes.io/projected/6ecd49cd-b0f1-40f9-80b6-5f0fedc99b97-kube-api-access-7sxd8\") pod \"glance-operator-controller-manager-67dd55ff59-6gzdm\" (UID: \"6ecd49cd-b0f1-40f9-80b6-5f0fedc99b97\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-6gzdm" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.005989 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2k22\" (UniqueName: \"kubernetes.io/projected/713c4f3d-0a16-43b1-a9ba-52f2905863b7-kube-api-access-m2k22\") pod \"designate-operator-controller-manager-77554cdc5c-pb687\" (UID: \"713c4f3d-0a16-43b1-a9ba-52f2905863b7\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-pb687" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.006036 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kt6w\" (UniqueName: \"kubernetes.io/projected/cf5055fa-99ac-4063-b192-5743f331b01a-kube-api-access-2kt6w\") pod \"barbican-operator-controller-manager-65ff799cfd-8d8dj\" (UID: \"cf5055fa-99ac-4063-b192-5743f331b01a\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-8d8dj" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.006103 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hzzj2"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.006111 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl7pv\" (UniqueName: \"kubernetes.io/projected/5821c221-c684-480a-a174-2154d785d9be-kube-api-access-wl7pv\") pod \"heat-operator-controller-manager-575ffb885b-jbx77\" (UID: \"5821c221-c684-480a-a174-2154d785d9be\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-jbx77" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.007064 5049 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hzzj2" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.016201 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rm4sk" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.039726 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hzzj2"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.045111 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.045997 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.051786 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.052409 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2wmz7" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.058568 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.065372 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-pqbc2"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.066278 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-pqbc2" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.073008 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-qjftl"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.073724 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-qjftl" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.075699 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-kfn8t" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.076791 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-sjm5n" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.084740 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-pqbc2"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.090409 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-qjftl"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.096240 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.097176 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.098751 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-dffsc" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.107356 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kt6w\" (UniqueName: \"kubernetes.io/projected/cf5055fa-99ac-4063-b192-5743f331b01a-kube-api-access-2kt6w\") pod \"barbican-operator-controller-manager-65ff799cfd-8d8dj\" (UID: \"cf5055fa-99ac-4063-b192-5743f331b01a\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-8d8dj" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.107389 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl7pv\" (UniqueName: \"kubernetes.io/projected/5821c221-c684-480a-a174-2154d785d9be-kube-api-access-wl7pv\") pod \"heat-operator-controller-manager-575ffb885b-jbx77\" (UID: \"5821c221-c684-480a-a174-2154d785d9be\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-jbx77" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.107433 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54csw\" (UniqueName: \"kubernetes.io/projected/1046a5c5-7064-4f3d-8a27-4c70edefff18-kube-api-access-54csw\") pod \"cinder-operator-controller-manager-655bf9cfbb-6fxd4\" (UID: \"1046a5c5-7064-4f3d-8a27-4c70edefff18\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6fxd4" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.107478 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sxd8\" (UniqueName: \"kubernetes.io/projected/6ecd49cd-b0f1-40f9-80b6-5f0fedc99b97-kube-api-access-7sxd8\") pod \"glance-operator-controller-manager-67dd55ff59-6gzdm\" (UID: \"6ecd49cd-b0f1-40f9-80b6-5f0fedc99b97\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-6gzdm" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.107500 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2k22\" (UniqueName: \"kubernetes.io/projected/713c4f3d-0a16-43b1-a9ba-52f2905863b7-kube-api-access-m2k22\") pod \"designate-operator-controller-manager-77554cdc5c-pb687\" (UID: \"713c4f3d-0a16-43b1-a9ba-52f2905863b7\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-pb687" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.111293 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.112223 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.115313 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9wpzl" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.135025 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.154922 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl7pv\" (UniqueName: \"kubernetes.io/projected/5821c221-c684-480a-a174-2154d785d9be-kube-api-access-wl7pv\") pod \"heat-operator-controller-manager-575ffb885b-jbx77\" (UID: \"5821c221-c684-480a-a174-2154d785d9be\") " pod="openstack-operators/heat-operator-controller-manager-575ffb885b-jbx77" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.154957 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2k22\" (UniqueName: \"kubernetes.io/projected/713c4f3d-0a16-43b1-a9ba-52f2905863b7-kube-api-access-m2k22\") pod \"designate-operator-controller-manager-77554cdc5c-pb687\" (UID: \"713c4f3d-0a16-43b1-a9ba-52f2905863b7\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-pb687" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.180174 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54csw\" (UniqueName: \"kubernetes.io/projected/1046a5c5-7064-4f3d-8a27-4c70edefff18-kube-api-access-54csw\") pod \"cinder-operator-controller-manager-655bf9cfbb-6fxd4\" (UID: \"1046a5c5-7064-4f3d-8a27-4c70edefff18\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6fxd4" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.184783 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sxd8\" (UniqueName: \"kubernetes.io/projected/6ecd49cd-b0f1-40f9-80b6-5f0fedc99b97-kube-api-access-7sxd8\") pod \"glance-operator-controller-manager-67dd55ff59-6gzdm\" (UID: \"6ecd49cd-b0f1-40f9-80b6-5f0fedc99b97\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-6gzdm" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.185367 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kt6w\" (UniqueName: \"kubernetes.io/projected/cf5055fa-99ac-4063-b192-5743f331b01a-kube-api-access-2kt6w\") pod \"barbican-operator-controller-manager-65ff799cfd-8d8dj\" (UID: \"cf5055fa-99ac-4063-b192-5743f331b01a\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-8d8dj" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.216828 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.217429 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdl25\" (UniqueName: \"kubernetes.io/projected/b58745a3-9e9d-4337-8965-6caf2ade0bdd-kube-api-access-wdl25\") pod \"ironic-operator-controller-manager-768b776ffb-pqbc2\" (UID: \"b58745a3-9e9d-4337-8965-6caf2ade0bdd\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-pqbc2" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.217466 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-hdd85\" (UID: \"61d18054-36c7-4e08-a20d-7dd2bb853959\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.217490 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hkbc\" (UniqueName: \"kubernetes.io/projected/61d8b171-8120-44cb-a074-54ea2aea3735-kube-api-access-4hkbc\") pod \"horizon-operator-controller-manager-77d5c5b54f-hzzj2\" (UID: \"61d8b171-8120-44cb-a074-54ea2aea3735\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hzzj2" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.217520 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g45wb\" (UniqueName: \"kubernetes.io/projected/d5477f31-e31c-47a2-bbaf-543196a1908e-kube-api-access-g45wb\") pod \"manila-operator-controller-manager-849fcfbb6b-tdbpz\" (UID: \"d5477f31-e31c-47a2-bbaf-543196a1908e\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.217550 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5jq6\" (UniqueName: \"kubernetes.io/projected/f401ecb9-876a-4bed-9848-1ab332f71010-kube-api-access-q5jq6\") pod \"keystone-operator-controller-manager-55f684fd56-qjftl\" (UID: \"f401ecb9-876a-4bed-9848-1ab332f71010\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-qjftl" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.217599 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5mgl\" (UniqueName: \"kubernetes.io/projected/61d18054-36c7-4e08-a20d-7dd2bb853959-kube-api-access-c5mgl\") pod \"infra-operator-controller-manager-7d75bc88d5-hdd85\" (UID: \"61d18054-36c7-4e08-a20d-7dd2bb853959\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.218647 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-jmr5r"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.219360 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-jmr5r" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.223694 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-v6f9x" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.224516 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-8d8dj" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.249896 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6fxd4" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.255812 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-jmr5r"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.258416 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.259612 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.265077 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-j7fst" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.284816 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-pb687" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.285390 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-6gzdm" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.285629 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.294894 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-jbx77" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.295281 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-zxh24"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.296126 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-zxh24" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.299443 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-rlv92" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.312855 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.313995 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.320364 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-hdd85\" (UID: \"61d18054-36c7-4e08-a20d-7dd2bb853959\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.320700 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hkbc\" (UniqueName: \"kubernetes.io/projected/61d8b171-8120-44cb-a074-54ea2aea3735-kube-api-access-4hkbc\") pod \"horizon-operator-controller-manager-77d5c5b54f-hzzj2\" (UID: \"61d8b171-8120-44cb-a074-54ea2aea3735\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hzzj2" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.320876 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g45wb\" (UniqueName: \"kubernetes.io/projected/d5477f31-e31c-47a2-bbaf-543196a1908e-kube-api-access-g45wb\") pod \"manila-operator-controller-manager-849fcfbb6b-tdbpz\" (UID: \"d5477f31-e31c-47a2-bbaf-543196a1908e\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.320993 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd8gj\" (UniqueName: \"kubernetes.io/projected/03f2c7d7-4a4d-479c-aace-8cb3f75f5a34-kube-api-access-rd8gj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-45drh\" (UID: \"03f2c7d7-4a4d-479c-aace-8cb3f75f5a34\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.321092 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5jq6\" (UniqueName: \"kubernetes.io/projected/f401ecb9-876a-4bed-9848-1ab332f71010-kube-api-access-q5jq6\") pod \"keystone-operator-controller-manager-55f684fd56-qjftl\" (UID: \"f401ecb9-876a-4bed-9848-1ab332f71010\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-qjftl" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.321229 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5mgl\" (UniqueName: \"kubernetes.io/projected/61d18054-36c7-4e08-a20d-7dd2bb853959-kube-api-access-c5mgl\") pod \"infra-operator-controller-manager-7d75bc88d5-hdd85\" (UID: \"61d18054-36c7-4e08-a20d-7dd2bb853959\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.321362 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdl25\" (UniqueName: \"kubernetes.io/projected/b58745a3-9e9d-4337-8965-6caf2ade0bdd-kube-api-access-wdl25\") pod \"ironic-operator-controller-manager-768b776ffb-pqbc2\" (UID: \"b58745a3-9e9d-4337-8965-6caf2ade0bdd\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-pqbc2" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.321958 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 17:14:39 crc 
kubenswrapper[5049]: E0127 17:14:39.322084 5049 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 17:14:39 crc kubenswrapper[5049]: E0127 17:14:39.322127 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert podName:61d18054-36c7-4e08-a20d-7dd2bb853959 nodeName:}" failed. No retries permitted until 2026-01-27 17:14:39.822112853 +0000 UTC m=+1054.921086392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert") pod "infra-operator-controller-manager-7d75bc88d5-hdd85" (UID: "61d18054-36c7-4e08-a20d-7dd2bb853959") : secret "infra-operator-webhook-server-cert" not found Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.322172 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zmspl" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.325810 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-zxh24"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.335580 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-cz6ks"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.336474 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cz6ks" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.340190 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.341161 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.344268 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.345107 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.346115 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-dj4cm" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.346362 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6l2vh" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.346789 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5jq6\" (UniqueName: \"kubernetes.io/projected/f401ecb9-876a-4bed-9848-1ab332f71010-kube-api-access-q5jq6\") pod \"keystone-operator-controller-manager-55f684fd56-qjftl\" (UID: \"f401ecb9-876a-4bed-9848-1ab332f71010\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-qjftl" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.349049 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-vqjwj" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.349471 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5mgl\" (UniqueName: \"kubernetes.io/projected/61d18054-36c7-4e08-a20d-7dd2bb853959-kube-api-access-c5mgl\") pod \"infra-operator-controller-manager-7d75bc88d5-hdd85\" (UID: \"61d18054-36c7-4e08-a20d-7dd2bb853959\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.349549 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hkbc\" (UniqueName: \"kubernetes.io/projected/61d8b171-8120-44cb-a074-54ea2aea3735-kube-api-access-4hkbc\") pod \"horizon-operator-controller-manager-77d5c5b54f-hzzj2\" (UID: \"61d8b171-8120-44cb-a074-54ea2aea3735\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hzzj2" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.353152 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g45wb\" (UniqueName: \"kubernetes.io/projected/d5477f31-e31c-47a2-bbaf-543196a1908e-kube-api-access-g45wb\") pod \"manila-operator-controller-manager-849fcfbb6b-tdbpz\" (UID: \"d5477f31-e31c-47a2-bbaf-543196a1908e\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.354939 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdl25\" (UniqueName: \"kubernetes.io/projected/b58745a3-9e9d-4337-8965-6caf2ade0bdd-kube-api-access-wdl25\") pod \"ironic-operator-controller-manager-768b776ffb-pqbc2\" (UID: \"b58745a3-9e9d-4337-8965-6caf2ade0bdd\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-pqbc2" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.358137 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.362899 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.389820 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-pqbc2" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.405340 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.405715 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-qjftl" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.413269 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.417846 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-cz6ks"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.423875 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk8jn\" (UniqueName: \"kubernetes.io/projected/ac863301-b663-4e76-83af-5b1596a19d5a-kube-api-access-fk8jn\") pod \"ovn-operator-controller-manager-6f75f45d54-shfpw\" (UID: \"ac863301-b663-4e76-83af-5b1596a19d5a\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.423955 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99qv\" (UniqueName: \"kubernetes.io/projected/42f93a5d-5678-4e1e-b5a0-1bd0017dab7c-kube-api-access-l99qv\") pod \"swift-operator-controller-manager-547cbdb99f-cz6ks\" (UID: \"42f93a5d-5678-4e1e-b5a0-1bd0017dab7c\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cz6ks" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.423994 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4g62\" (UniqueName: \"kubernetes.io/projected/640dbea5-d940-4917-ba99-8b506007a8c8-kube-api-access-h4g62\") pod \"placement-operator-controller-manager-79d5ccc684-b7gkd\" (UID: \"640dbea5-d940-4917-ba99-8b506007a8c8\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.424047 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4479s\" (UniqueName: \"kubernetes.io/projected/89eae96a-ae74-441c-b4f4-6423b01e11c9-kube-api-access-4479s\") pod \"nova-operator-controller-manager-ddcbfd695-24wdf\" (UID: \"89eae96a-ae74-441c-b4f4-6423b01e11c9\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.424069 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt\" (UID: \"954304ce-c36e-4eec-989f-a56c4d63f97e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.424101 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxhl2\" (UniqueName: 
\"kubernetes.io/projected/596a2bcb-e21f-4ea7-ac0d-1b1f313b7e82-kube-api-access-sxhl2\") pod \"neutron-operator-controller-manager-7ffd8d76d4-jmr5r\" (UID: \"596a2bcb-e21f-4ea7-ac0d-1b1f313b7e82\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-jmr5r" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.424132 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhrdf\" (UniqueName: \"kubernetes.io/projected/954304ce-c36e-4eec-989f-a56c4d63f97e-kube-api-access-xhrdf\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt\" (UID: \"954304ce-c36e-4eec-989f-a56c4d63f97e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.424194 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj8tq\" (UniqueName: \"kubernetes.io/projected/1d2513e9-4b0c-4bf3-9fed-81a347f8e5bf-kube-api-access-fj8tq\") pod \"octavia-operator-controller-manager-7875d7675-zxh24\" (UID: \"1d2513e9-4b0c-4bf3-9fed-81a347f8e5bf\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-zxh24" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.424223 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd8gj\" (UniqueName: \"kubernetes.io/projected/03f2c7d7-4a4d-479c-aace-8cb3f75f5a34-kube-api-access-rd8gj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-45drh\" (UID: \"03f2c7d7-4a4d-479c-aace-8cb3f75f5a34\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.442665 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.443654 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.450517 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-sqzcj" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.454689 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd8gj\" (UniqueName: \"kubernetes.io/projected/03f2c7d7-4a4d-479c-aace-8cb3f75f5a34-kube-api-access-rd8gj\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-45drh\" (UID: \"03f2c7d7-4a4d-479c-aace-8cb3f75f5a34\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.495908 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.525301 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4479s\" (UniqueName: \"kubernetes.io/projected/89eae96a-ae74-441c-b4f4-6423b01e11c9-kube-api-access-4479s\") pod \"nova-operator-controller-manager-ddcbfd695-24wdf\" (UID: \"89eae96a-ae74-441c-b4f4-6423b01e11c9\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.525347 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt\" (UID: \"954304ce-c36e-4eec-989f-a56c4d63f97e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.525377 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxhl2\" (UniqueName: \"kubernetes.io/projected/596a2bcb-e21f-4ea7-ac0d-1b1f313b7e82-kube-api-access-sxhl2\") pod \"neutron-operator-controller-manager-7ffd8d76d4-jmr5r\" (UID: \"596a2bcb-e21f-4ea7-ac0d-1b1f313b7e82\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-jmr5r" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.525403 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhrdf\" (UniqueName: \"kubernetes.io/projected/954304ce-c36e-4eec-989f-a56c4d63f97e-kube-api-access-xhrdf\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt\" (UID: \"954304ce-c36e-4eec-989f-a56c4d63f97e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.525455 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj8tq\" (UniqueName: \"kubernetes.io/projected/1d2513e9-4b0c-4bf3-9fed-81a347f8e5bf-kube-api-access-fj8tq\") pod \"octavia-operator-controller-manager-7875d7675-zxh24\" (UID: \"1d2513e9-4b0c-4bf3-9fed-81a347f8e5bf\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-zxh24" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.525479 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk8jn\" (UniqueName: \"kubernetes.io/projected/ac863301-b663-4e76-83af-5b1596a19d5a-kube-api-access-fk8jn\") pod 
\"ovn-operator-controller-manager-6f75f45d54-shfpw\" (UID: \"ac863301-b663-4e76-83af-5b1596a19d5a\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.525507 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l99qv\" (UniqueName: \"kubernetes.io/projected/42f93a5d-5678-4e1e-b5a0-1bd0017dab7c-kube-api-access-l99qv\") pod \"swift-operator-controller-manager-547cbdb99f-cz6ks\" (UID: \"42f93a5d-5678-4e1e-b5a0-1bd0017dab7c\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cz6ks" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.525540 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4g62\" (UniqueName: \"kubernetes.io/projected/640dbea5-d940-4917-ba99-8b506007a8c8-kube-api-access-h4g62\") pod \"placement-operator-controller-manager-79d5ccc684-b7gkd\" (UID: \"640dbea5-d940-4917-ba99-8b506007a8c8\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.525583 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh" Jan 27 17:14:39 crc kubenswrapper[5049]: E0127 17:14:39.526244 5049 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 17:14:39 crc kubenswrapper[5049]: E0127 17:14:39.526426 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert podName:954304ce-c36e-4eec-989f-a56c4d63f97e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:40.026410331 +0000 UTC m=+1055.125383880 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" (UID: "954304ce-c36e-4eec-989f-a56c4d63f97e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.588464 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4g62\" (UniqueName: \"kubernetes.io/projected/640dbea5-d940-4917-ba99-8b506007a8c8-kube-api-access-h4g62\") pod \"placement-operator-controller-manager-79d5ccc684-b7gkd\" (UID: \"640dbea5-d940-4917-ba99-8b506007a8c8\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.589140 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj8tq\" (UniqueName: \"kubernetes.io/projected/1d2513e9-4b0c-4bf3-9fed-81a347f8e5bf-kube-api-access-fj8tq\") pod \"octavia-operator-controller-manager-7875d7675-zxh24\" (UID: \"1d2513e9-4b0c-4bf3-9fed-81a347f8e5bf\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-zxh24" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.595850 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99qv\" (UniqueName: \"kubernetes.io/projected/42f93a5d-5678-4e1e-b5a0-1bd0017dab7c-kube-api-access-l99qv\") pod \"swift-operator-controller-manager-547cbdb99f-cz6ks\" (UID: \"42f93a5d-5678-4e1e-b5a0-1bd0017dab7c\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cz6ks" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.598481 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhrdf\" (UniqueName: \"kubernetes.io/projected/954304ce-c36e-4eec-989f-a56c4d63f97e-kube-api-access-xhrdf\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt\" (UID: \"954304ce-c36e-4eec-989f-a56c4d63f97e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.600283 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk8jn\" (UniqueName: \"kubernetes.io/projected/ac863301-b663-4e76-83af-5b1596a19d5a-kube-api-access-fk8jn\") pod \"ovn-operator-controller-manager-6f75f45d54-shfpw\" (UID: \"ac863301-b663-4e76-83af-5b1596a19d5a\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.600495 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxhl2\" (UniqueName: \"kubernetes.io/projected/596a2bcb-e21f-4ea7-ac0d-1b1f313b7e82-kube-api-access-sxhl2\") pod \"neutron-operator-controller-manager-7ffd8d76d4-jmr5r\" (UID: \"596a2bcb-e21f-4ea7-ac0d-1b1f313b7e82\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-jmr5r" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.606543 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-qrvc5"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.607324 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4479s\" (UniqueName: \"kubernetes.io/projected/89eae96a-ae74-441c-b4f4-6423b01e11c9-kube-api-access-4479s\") pod 
\"nova-operator-controller-manager-ddcbfd695-24wdf\" (UID: \"89eae96a-ae74-441c-b4f4-6423b01e11c9\") " pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.607449 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qrvc5" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.608877 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.611385 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-dp6vj" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.633279 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q7mg\" (UniqueName: \"kubernetes.io/projected/23d834fc-5840-49a4-aa49-5a84e8490e39-kube-api-access-9q7mg\") pod \"telemetry-operator-controller-manager-799bc87c89-dzmjg\" (UID: \"23d834fc-5840-49a4-aa49-5a84e8490e39\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.633500 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4npdn\" (UniqueName: \"kubernetes.io/projected/63489758-4a49-40f2-8886-7c09cc103f40-kube-api-access-4npdn\") pod \"test-operator-controller-manager-69797bbcbd-qrvc5\" (UID: \"63489758-4a49-40f2-8886-7c09cc103f40\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qrvc5" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.638012 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hzzj2" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.660193 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.685080 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-zxh24" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.748578 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4npdn\" (UniqueName: \"kubernetes.io/projected/63489758-4a49-40f2-8886-7c09cc103f40-kube-api-access-4npdn\") pod \"test-operator-controller-manager-69797bbcbd-qrvc5\" (UID: \"63489758-4a49-40f2-8886-7c09cc103f40\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qrvc5" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.748712 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q7mg\" (UniqueName: \"kubernetes.io/projected/23d834fc-5840-49a4-aa49-5a84e8490e39-kube-api-access-9q7mg\") pod \"telemetry-operator-controller-manager-799bc87c89-dzmjg\" (UID: \"23d834fc-5840-49a4-aa49-5a84e8490e39\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.757404 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cz6ks" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.759454 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-qrvc5"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.759520 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-rxn7s"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.760151 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-rxn7s"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.760171 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.760511 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-rxn7s" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.760766 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.767642 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2ffzb" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.768950 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.769079 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.770925 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.771890 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-k8k87" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.782624 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q7mg\" (UniqueName: \"kubernetes.io/projected/23d834fc-5840-49a4-aa49-5a84e8490e39-kube-api-access-9q7mg\") pod \"telemetry-operator-controller-manager-799bc87c89-dzmjg\" (UID: \"23d834fc-5840-49a4-aa49-5a84e8490e39\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.794121 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-26ckr"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.797821 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.800995 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-26ckr" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.802281 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4npdn\" (UniqueName: \"kubernetes.io/projected/63489758-4a49-40f2-8886-7c09cc103f40-kube-api-access-4npdn\") pod \"test-operator-controller-manager-69797bbcbd-qrvc5\" (UID: \"63489758-4a49-40f2-8886-7c09cc103f40\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qrvc5" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.810589 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-cfhx6" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.812587 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-26ckr"] Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.844903 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qrvc5" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.850230 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-hdd85\" (UID: \"61d18054-36c7-4e08-a20d-7dd2bb853959\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85" Jan 27 17:14:39 crc kubenswrapper[5049]: E0127 17:14:39.851083 5049 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 17:14:39 crc kubenswrapper[5049]: E0127 17:14:39.851123 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert podName:61d18054-36c7-4e08-a20d-7dd2bb853959 nodeName:}" failed. No retries permitted until 2026-01-27 17:14:40.85111 +0000 UTC m=+1055.950083549 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert") pod "infra-operator-controller-manager-7d75bc88d5-hdd85" (UID: "61d18054-36c7-4e08-a20d-7dd2bb853959") : secret "infra-operator-webhook-server-cert" not found Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.865594 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-jmr5r" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.946769 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.952081 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.959537 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5stkr\" (UniqueName: \"kubernetes.io/projected/d0a4b955-b7b5-45be-a997-7ed2d360218e-kube-api-access-5stkr\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.959605 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5h5k\" (UniqueName: \"kubernetes.io/projected/89c673d2-c27b-4b39-bb48-b463d5626491-kube-api-access-z5h5k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-26ckr\" (UID: \"89c673d2-c27b-4b39-bb48-b463d5626491\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-26ckr" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.959651 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pp4c\" (UniqueName: \"kubernetes.io/projected/0b239d42-1dea-4559-8cdd-8db8cb8addab-kube-api-access-9pp4c\") pod \"watcher-operator-controller-manager-6c9bb4b66c-rxn7s\" (UID: \"0b239d42-1dea-4559-8cdd-8db8cb8addab\") " pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-rxn7s" Jan 27 17:14:39 crc kubenswrapper[5049]: I0127 17:14:39.959741 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.064392 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.064483 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.064515 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt\" (UID: \"954304ce-c36e-4eec-989f-a56c4d63f97e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.064535 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5stkr\" (UniqueName: \"kubernetes.io/projected/d0a4b955-b7b5-45be-a997-7ed2d360218e-kube-api-access-5stkr\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.064559 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5h5k\" (UniqueName: \"kubernetes.io/projected/89c673d2-c27b-4b39-bb48-b463d5626491-kube-api-access-z5h5k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-26ckr\" (UID: \"89c673d2-c27b-4b39-bb48-b463d5626491\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-26ckr" Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.064587 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pp4c\" (UniqueName: \"kubernetes.io/projected/0b239d42-1dea-4559-8cdd-8db8cb8addab-kube-api-access-9pp4c\") pod \"watcher-operator-controller-manager-6c9bb4b66c-rxn7s\" (UID: \"0b239d42-1dea-4559-8cdd-8db8cb8addab\") " pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-rxn7s" Jan 27 17:14:40 crc kubenswrapper[5049]: E0127 17:14:40.065017 5049 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 17:14:40 crc kubenswrapper[5049]: E0127 17:14:40.065061 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs podName:d0a4b955-b7b5-45be-a997-7ed2d360218e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:40.565047554 +0000 UTC m=+1055.664021103 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs") pod "openstack-operator-controller-manager-556b4c5b88-vhwl2" (UID: "d0a4b955-b7b5-45be-a997-7ed2d360218e") : secret "webhook-server-cert" not found Jan 27 17:14:40 crc kubenswrapper[5049]: E0127 17:14:40.065193 5049 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 17:14:40 crc kubenswrapper[5049]: E0127 17:14:40.065215 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs podName:d0a4b955-b7b5-45be-a997-7ed2d360218e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:40.565208369 +0000 UTC m=+1055.664181918 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs") pod "openstack-operator-controller-manager-556b4c5b88-vhwl2" (UID: "d0a4b955-b7b5-45be-a997-7ed2d360218e") : secret "metrics-server-cert" not found Jan 27 17:14:40 crc kubenswrapper[5049]: E0127 17:14:40.065248 5049 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 17:14:40 crc kubenswrapper[5049]: E0127 17:14:40.065286 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert podName:954304ce-c36e-4eec-989f-a56c4d63f97e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:41.06526054 +0000 UTC m=+1056.164234079 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" (UID: "954304ce-c36e-4eec-989f-a56c4d63f97e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.089545 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pp4c\" (UniqueName: \"kubernetes.io/projected/0b239d42-1dea-4559-8cdd-8db8cb8addab-kube-api-access-9pp4c\") pod \"watcher-operator-controller-manager-6c9bb4b66c-rxn7s\" (UID: \"0b239d42-1dea-4559-8cdd-8db8cb8addab\") " pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-rxn7s" Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.094371 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5stkr\" (UniqueName: \"kubernetes.io/projected/d0a4b955-b7b5-45be-a997-7ed2d360218e-kube-api-access-5stkr\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.103094 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5h5k\" (UniqueName: \"kubernetes.io/projected/89c673d2-c27b-4b39-bb48-b463d5626491-kube-api-access-z5h5k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-26ckr\" (UID: \"89c673d2-c27b-4b39-bb48-b463d5626491\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-26ckr" Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.186343 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-rxn7s" Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.210782 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-6gzdm"] Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.226574 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-26ckr" Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.400607 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-8d8dj"] Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.570782 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.570880 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:40 crc kubenswrapper[5049]: E0127 17:14:40.571040 5049 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 17:14:40 crc kubenswrapper[5049]: E0127 17:14:40.571106 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs podName:d0a4b955-b7b5-45be-a997-7ed2d360218e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:41.571089939 +0000 UTC m=+1056.670063488 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs") pod "openstack-operator-controller-manager-556b4c5b88-vhwl2" (UID: "d0a4b955-b7b5-45be-a997-7ed2d360218e") : secret "metrics-server-cert" not found Jan 27 17:14:40 crc kubenswrapper[5049]: E0127 17:14:40.572184 5049 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 17:14:40 crc kubenswrapper[5049]: E0127 17:14:40.572234 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs podName:d0a4b955-b7b5-45be-a997-7ed2d360218e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:41.572225242 +0000 UTC m=+1056.671198791 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs") pod "openstack-operator-controller-manager-556b4c5b88-vhwl2" (UID: "d0a4b955-b7b5-45be-a997-7ed2d360218e") : secret "webhook-server-cert" not found Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.832545 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-6gzdm" event={"ID":"6ecd49cd-b0f1-40f9-80b6-5f0fedc99b97","Type":"ContainerStarted","Data":"f216ad59a84a1c2ead22a7806e88ed6b05d7a63dcdfdd696f9408cb10b3a0fd9"} Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.833759 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-8d8dj" event={"ID":"cf5055fa-99ac-4063-b192-5743f331b01a","Type":"ContainerStarted","Data":"52d178a6293a420cfbe182afa00d502a60b182760909e30d48dd46bba024908b"} Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.875335 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-hdd85\" (UID: \"61d18054-36c7-4e08-a20d-7dd2bb853959\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85" Jan 27 17:14:40 crc kubenswrapper[5049]: E0127 17:14:40.875568 5049 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 17:14:40 crc kubenswrapper[5049]: E0127 17:14:40.875704 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert podName:61d18054-36c7-4e08-a20d-7dd2bb853959 nodeName:}" failed. No retries permitted until 2026-01-27 17:14:42.875649717 +0000 UTC m=+1057.974623326 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert") pod "infra-operator-controller-manager-7d75bc88d5-hdd85" (UID: "61d18054-36c7-4e08-a20d-7dd2bb853959") : secret "infra-operator-webhook-server-cert" not found Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.898931 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6fxd4"] Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.919958 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hzzj2"] Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.950542 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-zxh24"] Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.960466 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-cz6ks"] Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.973718 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-qjftl"] Jan 27 17:14:40 crc kubenswrapper[5049]: W0127 17:14:40.978607 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf401ecb9_876a_4bed_9848_1ab332f71010.slice/crio-512a5b2a497dd1090dc3f80f3b6bc34cff5943c02916bbd4346cb603f709cc21 WatchSource:0}: Error finding container 512a5b2a497dd1090dc3f80f3b6bc34cff5943c02916bbd4346cb603f709cc21: Status 404 returned error can't find the container with id 512a5b2a497dd1090dc3f80f3b6bc34cff5943c02916bbd4346cb603f709cc21 Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.984140 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-575ffb885b-jbx77"] Jan 27 17:14:40 crc kubenswrapper[5049]: I0127 17:14:40.997038 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-pb687"] Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.007545 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-jmr5r"] Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.009100 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z5h5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-26ckr_openstack-operators(89c673d2-c27b-4b39-bb48-b463d5626491): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.009233 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g45wb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-849fcfbb6b-tdbpz_openstack-operators(d5477f31-e31c-47a2-bbaf-543196a1908e): ErrImagePull: pull 
QPS exceeded" logger="UnhandledError" Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.012526 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-26ckr" podUID="89c673d2-c27b-4b39-bb48-b463d5626491" Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.012596 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz"] Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.012894 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz" podUID="d5477f31-e31c-47a2-bbaf-543196a1908e" Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.022509 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4479s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-ddcbfd695-24wdf_openstack-operators(89eae96a-ae74-441c-b4f4-6423b01e11c9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.022820 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-qrvc5"] Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.022849 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fk8jn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-shfpw_openstack-operators(ac863301-b663-4e76-83af-5b1596a19d5a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.022873 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rd8gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-45drh_openstack-operators(03f2c7d7-4a4d-479c-aace-8cb3f75f5a34): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.022996 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9q7mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-799bc87c89-dzmjg_openstack-operators(23d834fc-5840-49a4-aa49-5a84e8490e39): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.023589 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf" podUID="89eae96a-ae74-441c-b4f4-6423b01e11c9" Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.024271 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg" podUID="23d834fc-5840-49a4-aa49-5a84e8490e39" Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.024300 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw" podUID="ac863301-b663-4e76-83af-5b1596a19d5a" Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.024304 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh" podUID="03f2c7d7-4a4d-479c-aace-8cb3f75f5a34" Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.025368 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h4g62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-b7gkd_openstack-operators(640dbea5-d940-4917-ba99-8b506007a8c8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.028573 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd" podUID="640dbea5-d940-4917-ba99-8b506007a8c8" Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.028583 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-pqbc2"] Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.035722 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg"] Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.042780 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh"] Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.052538 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-26ckr"] Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.078372 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd"] Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.080860 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt\" (UID: \"954304ce-c36e-4eec-989f-a56c4d63f97e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.081047 5049 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.081119 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert podName:954304ce-c36e-4eec-989f-a56c4d63f97e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:43.081098398 +0000 UTC m=+1058.180071937 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" (UID: "954304ce-c36e-4eec-989f-a56c4d63f97e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.084450 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw"] Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.100196 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf"] Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.168861 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-rxn7s"] Jan 27 17:14:41 crc kubenswrapper[5049]: W0127 17:14:41.171897 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b239d42_1dea_4559_8cdd_8db8cb8addab.slice/crio-f7bfd2fa74529f45b1d97738c5849fd351de6881481da8055e179c5a91575a92 WatchSource:0}: Error finding container f7bfd2fa74529f45b1d97738c5849fd351de6881481da8055e179c5a91575a92: Status 404 returned error can't find the container with id f7bfd2fa74529f45b1d97738c5849fd351de6881481da8055e179c5a91575a92 Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.595803 5049 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.595878 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs podName:d0a4b955-b7b5-45be-a997-7ed2d360218e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:43.595858334 +0000 UTC m=+1058.694831883 (durationBeforeRetry 2s). 
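
[Editor's note] Every one of these mount failures traces back to the same cause: the pod spec references a Secret ("webhook-server-cert", "metrics-server-cert", the per-operator "*-webhook-server-cert" secrets) that does not exist yet, presumably because whatever component issues the certificates has not created it at this point in startup. The kubelet keeps retrying with the backoff shown above, so the mounts succeed on their own once the Secrets appear; no pod restart is needed. A diagnostic sketch, assuming the kubernetes Python client and a working kubeconfig, that checks whether the Secrets exist yet:

```python
# Diagnostic sketch: list which of the Secrets named in the
# "Couldn't get secret" errors above exist in openstack-operators.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()

for name in ("webhook-server-cert", "metrics-server-cert",
             "infra-operator-webhook-server-cert",
             "openstack-baremetal-operator-webhook-server-cert"):
    try:
        v1.read_namespaced_secret(name, "openstack-operators")
        print(f"{name}: present")
    except ApiException as e:
        if e.status == 404:
            print(f"{name}: not found (mounts will keep retrying)")
        else:
            raise
```
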
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs") pod "openstack-operator-controller-manager-556b4c5b88-vhwl2" (UID: "d0a4b955-b7b5-45be-a997-7ed2d360218e") : secret "webhook-server-cert" not found Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.595664 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.596288 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.596431 5049 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.596464 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs podName:d0a4b955-b7b5-45be-a997-7ed2d360218e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:43.596452721 +0000 UTC m=+1058.695426270 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs") pod "openstack-operator-controller-manager-556b4c5b88-vhwl2" (UID: "d0a4b955-b7b5-45be-a997-7ed2d360218e") : secret "metrics-server-cert" not found Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.842288 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-jbx77" event={"ID":"5821c221-c684-480a-a174-2154d785d9be","Type":"ContainerStarted","Data":"e667ba7f98a66c54c7ded19f2170e0f734e97d4deb40b3d37610468d61a2ed2a"} Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.845756 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qrvc5" event={"ID":"63489758-4a49-40f2-8886-7c09cc103f40","Type":"ContainerStarted","Data":"52e06403d0aa7b3b98dd60eed99ed1f339258d27a247d388d8ffae68c2b01f4c"} Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.847629 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg" event={"ID":"23d834fc-5840-49a4-aa49-5a84e8490e39","Type":"ContainerStarted","Data":"b18b9e0613c2e4886481d276abb7f877bc686829769fd89df0cb1f86478dd316"} Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.849208 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hzzj2" event={"ID":"61d8b171-8120-44cb-a074-54ea2aea3735","Type":"ContainerStarted","Data":"1ebbf92354596778a079f0aed245ce7af6d5f7f230faaf1c71e451753de277dd"} Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.852592 5049 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg" podUID="23d834fc-5840-49a4-aa49-5a84e8490e39" Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.852778 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-pb687" event={"ID":"713c4f3d-0a16-43b1-a9ba-52f2905863b7","Type":"ContainerStarted","Data":"915f3252952e581286e83df3885d6c2f3dcc76a3bf326b3522708d1c703b2f97"} Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.852849 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd" event={"ID":"640dbea5-d940-4917-ba99-8b506007a8c8","Type":"ContainerStarted","Data":"ca383fd1064196ad6913e980bce594705bdd3776e3994a06706f67a3f6546753"} Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.854562 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh" event={"ID":"03f2c7d7-4a4d-479c-aace-8cb3f75f5a34","Type":"ContainerStarted","Data":"63d9a511a705347d4cae985bd28ab18607838fe9646aa5f6d869a3c4da3a801d"} Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.855324 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd" podUID="640dbea5-d940-4917-ba99-8b506007a8c8" Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.856014 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh" podUID="03f2c7d7-4a4d-479c-aace-8cb3f75f5a34" Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.856755 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-jmr5r" event={"ID":"596a2bcb-e21f-4ea7-ac0d-1b1f313b7e82","Type":"ContainerStarted","Data":"3fca645d0cfb05c964416637379848c2b158d04bc959617aa5f8742c3c6b67c9"} Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.858626 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf" event={"ID":"89eae96a-ae74-441c-b4f4-6423b01e11c9","Type":"ContainerStarted","Data":"f82c52ade9c0bad2644d31f21fd13316058be82b46f7652575bc85cf4fe5e372"} Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.860814 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61\\\"\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf" podUID="89eae96a-ae74-441c-b4f4-6423b01e11c9" Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 
17:14:41.861907 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cz6ks" event={"ID":"42f93a5d-5678-4e1e-b5a0-1bd0017dab7c","Type":"ContainerStarted","Data":"208968a3319e2dd9365f3a9fd84d284b51efc576d0a14e3ebb3d82c584c609f2"} Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.864640 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-zxh24" event={"ID":"1d2513e9-4b0c-4bf3-9fed-81a347f8e5bf","Type":"ContainerStarted","Data":"3b8aea9fd764453e2c99a100a5e2f727918b2448b8e39affd88e53ce7769669e"} Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.872195 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-qjftl" event={"ID":"f401ecb9-876a-4bed-9848-1ab332f71010","Type":"ContainerStarted","Data":"512a5b2a497dd1090dc3f80f3b6bc34cff5943c02916bbd4346cb603f709cc21"} Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.874347 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-26ckr" event={"ID":"89c673d2-c27b-4b39-bb48-b463d5626491","Type":"ContainerStarted","Data":"4c6c65c938b121a6e78ee5f85bf519564aa2f6d137c8e0b2b0c5aa873c99a5dd"} Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.879535 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-26ckr" podUID="89c673d2-c27b-4b39-bb48-b463d5626491" Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.886963 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-rxn7s" event={"ID":"0b239d42-1dea-4559-8cdd-8db8cb8addab","Type":"ContainerStarted","Data":"f7bfd2fa74529f45b1d97738c5849fd351de6881481da8055e179c5a91575a92"} Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.895698 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6fxd4" event={"ID":"1046a5c5-7064-4f3d-8a27-4c70edefff18","Type":"ContainerStarted","Data":"81709d3a281be4b4bc9f2acacfca055a09444087764347c2dec086b5a3d7b14d"} Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.901534 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-pqbc2" event={"ID":"b58745a3-9e9d-4337-8965-6caf2ade0bdd","Type":"ContainerStarted","Data":"47a780a6ba960715ee1d255adb52f695ab2e73e83f25050ab2f6cf88d7796a69"} Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.903626 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw" event={"ID":"ac863301-b663-4e76-83af-5b1596a19d5a","Type":"ContainerStarted","Data":"34ead22539d57273022c18b12ae82394271fb337c7c1f03c6b918b22f5291703"} Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.905062 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw" podUID="ac863301-b663-4e76-83af-5b1596a19d5a" Jan 27 17:14:41 crc kubenswrapper[5049]: I0127 17:14:41.905407 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz" event={"ID":"d5477f31-e31c-47a2-bbaf-543196a1908e","Type":"ContainerStarted","Data":"291ce25f752eabf74c26ac6c233a2f8530a376c8131594317c83a00755800089"} Jan 27 17:14:41 crc kubenswrapper[5049]: E0127 17:14:41.908752 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84\\\"\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz" podUID="d5477f31-e31c-47a2-bbaf-543196a1908e" Jan 27 17:14:42 crc kubenswrapper[5049]: E0127 17:14:42.936618 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-26ckr" podUID="89c673d2-c27b-4b39-bb48-b463d5626491" Jan 27 17:14:42 crc kubenswrapper[5049]: E0127 17:14:42.936955 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw" podUID="ac863301-b663-4e76-83af-5b1596a19d5a" Jan 27 17:14:42 crc kubenswrapper[5049]: E0127 17:14:42.937013 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:1f1fea3b7df89b81756eab8e6f4c9bed01ab7e949a6ce2d7692c260f41dfbc20\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg" podUID="23d834fc-5840-49a4-aa49-5a84e8490e39" Jan 27 17:14:42 crc kubenswrapper[5049]: E0127 17:14:42.937068 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84\\\"\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz" podUID="d5477f31-e31c-47a2-bbaf-543196a1908e" Jan 27 17:14:42 crc kubenswrapper[5049]: E0127 17:14:42.937248 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd" podUID="640dbea5-d940-4917-ba99-8b506007a8c8" Jan 27 17:14:42 crc kubenswrapper[5049]: E0127 17:14:42.938583 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh" podUID="03f2c7d7-4a4d-479c-aace-8cb3f75f5a34" Jan 27 17:14:42 crc kubenswrapper[5049]: E0127 17:14:42.944300 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:a992613466db3478a00c20c28639c4a12f6326aa52c40a418d1ec40038c83b61\\\"\"" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf" podUID="89eae96a-ae74-441c-b4f4-6423b01e11c9" Jan 27 17:14:42 crc kubenswrapper[5049]: I0127 17:14:42.953242 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-hdd85\" (UID: \"61d18054-36c7-4e08-a20d-7dd2bb853959\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85" Jan 27 17:14:42 crc kubenswrapper[5049]: E0127 17:14:42.953442 5049 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 17:14:42 crc kubenswrapper[5049]: E0127 17:14:42.953486 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert podName:61d18054-36c7-4e08-a20d-7dd2bb853959 nodeName:}" failed. No retries permitted until 2026-01-27 17:14:46.953473161 +0000 UTC m=+1062.052446710 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert") pod "infra-operator-controller-manager-7d75bc88d5-hdd85" (UID: "61d18054-36c7-4e08-a20d-7dd2bb853959") : secret "infra-operator-webhook-server-cert" not found Jan 27 17:14:43 crc kubenswrapper[5049]: I0127 17:14:43.158137 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt\" (UID: \"954304ce-c36e-4eec-989f-a56c4d63f97e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" Jan 27 17:14:43 crc kubenswrapper[5049]: E0127 17:14:43.158266 5049 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 17:14:43 crc kubenswrapper[5049]: E0127 17:14:43.158316 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert podName:954304ce-c36e-4eec-989f-a56c4d63f97e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:47.158299524 +0000 UTC m=+1062.257273073 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" (UID: "954304ce-c36e-4eec-989f-a56c4d63f97e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 17:14:43 crc kubenswrapper[5049]: I0127 17:14:43.668579 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:43 crc kubenswrapper[5049]: I0127 17:14:43.668663 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:43 crc kubenswrapper[5049]: E0127 17:14:43.668799 5049 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 17:14:43 crc kubenswrapper[5049]: E0127 17:14:43.668808 5049 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 17:14:43 crc kubenswrapper[5049]: E0127 17:14:43.668850 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs podName:d0a4b955-b7b5-45be-a997-7ed2d360218e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:47.668836087 +0000 UTC m=+1062.767809636 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs") pod "openstack-operator-controller-manager-556b4c5b88-vhwl2" (UID: "d0a4b955-b7b5-45be-a997-7ed2d360218e") : secret "metrics-server-cert" not found Jan 27 17:14:43 crc kubenswrapper[5049]: E0127 17:14:43.668902 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs podName:d0a4b955-b7b5-45be-a997-7ed2d360218e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:47.668876339 +0000 UTC m=+1062.767849968 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs") pod "openstack-operator-controller-manager-556b4c5b88-vhwl2" (UID: "d0a4b955-b7b5-45be-a997-7ed2d360218e") : secret "webhook-server-cert" not found Jan 27 17:14:47 crc kubenswrapper[5049]: I0127 17:14:47.031212 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-hdd85\" (UID: \"61d18054-36c7-4e08-a20d-7dd2bb853959\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85" Jan 27 17:14:47 crc kubenswrapper[5049]: E0127 17:14:47.031404 5049 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 17:14:47 crc kubenswrapper[5049]: E0127 17:14:47.031757 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert podName:61d18054-36c7-4e08-a20d-7dd2bb853959 nodeName:}" failed. No retries permitted until 2026-01-27 17:14:55.031737869 +0000 UTC m=+1070.130711418 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert") pod "infra-operator-controller-manager-7d75bc88d5-hdd85" (UID: "61d18054-36c7-4e08-a20d-7dd2bb853959") : secret "infra-operator-webhook-server-cert" not found Jan 27 17:14:47 crc kubenswrapper[5049]: I0127 17:14:47.234513 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt\" (UID: \"954304ce-c36e-4eec-989f-a56c4d63f97e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" Jan 27 17:14:47 crc kubenswrapper[5049]: E0127 17:14:47.234765 5049 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 17:14:47 crc kubenswrapper[5049]: E0127 17:14:47.234848 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert podName:954304ce-c36e-4eec-989f-a56c4d63f97e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:55.234829111 +0000 UTC m=+1070.333802660 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" (UID: "954304ce-c36e-4eec-989f-a56c4d63f97e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 17:14:47 crc kubenswrapper[5049]: I0127 17:14:47.741157 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:47 crc kubenswrapper[5049]: I0127 17:14:47.741509 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:47 crc kubenswrapper[5049]: E0127 17:14:47.741393 5049 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 17:14:47 crc kubenswrapper[5049]: E0127 17:14:47.741639 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs podName:d0a4b955-b7b5-45be-a997-7ed2d360218e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:55.741610247 +0000 UTC m=+1070.840583826 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs") pod "openstack-operator-controller-manager-556b4c5b88-vhwl2" (UID: "d0a4b955-b7b5-45be-a997-7ed2d360218e") : secret "metrics-server-cert" not found Jan 27 17:14:47 crc kubenswrapper[5049]: E0127 17:14:47.741694 5049 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 17:14:47 crc kubenswrapper[5049]: E0127 17:14:47.741756 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs podName:d0a4b955-b7b5-45be-a997-7ed2d360218e nodeName:}" failed. No retries permitted until 2026-01-27 17:14:55.741739381 +0000 UTC m=+1070.840712930 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs") pod "openstack-operator-controller-manager-556b4c5b88-vhwl2" (UID: "d0a4b955-b7b5-45be-a997-7ed2d360218e") : secret "webhook-server-cert" not found Jan 27 17:14:47 crc kubenswrapper[5049]: I0127 17:14:47.781478 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:14:47 crc kubenswrapper[5049]: I0127 17:14:47.781544 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:14:47 crc kubenswrapper[5049]: I0127 17:14:47.781594 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 17:14:47 crc kubenswrapper[5049]: I0127 17:14:47.782256 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ad01eb278d8a66889a11fa84f093b411a8a38e169a31c62b60f821c2f9f05b1"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:14:47 crc kubenswrapper[5049]: I0127 17:14:47.782320 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://6ad01eb278d8a66889a11fa84f093b411a8a38e169a31c62b60f821c2f9f05b1" gracePeriod=600 Jan 27 17:14:48 crc kubenswrapper[5049]: I0127 17:14:48.031775 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="6ad01eb278d8a66889a11fa84f093b411a8a38e169a31c62b60f821c2f9f05b1" exitCode=0 Jan 27 17:14:48 crc kubenswrapper[5049]: I0127 17:14:48.031826 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"6ad01eb278d8a66889a11fa84f093b411a8a38e169a31c62b60f821c2f9f05b1"} Jan 27 17:14:48 crc kubenswrapper[5049]: I0127 17:14:48.031865 5049 scope.go:117] "RemoveContainer" containerID="d1aa9223cd763227032c3196c83813fd302f48bd7085cca520f2fac4b65a3aa4" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.055561 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-hdd85\" (UID: \"61d18054-36c7-4e08-a20d-7dd2bb853959\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85" Jan 27 17:14:55 crc kubenswrapper[5049]: E0127 17:14:55.055752 5049 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 17:14:55 crc kubenswrapper[5049]: E0127 17:14:55.056393 5049 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert podName:61d18054-36c7-4e08-a20d-7dd2bb853959 nodeName:}" failed. No retries permitted until 2026-01-27 17:15:11.056371862 +0000 UTC m=+1086.155345411 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert") pod "infra-operator-controller-manager-7d75bc88d5-hdd85" (UID: "61d18054-36c7-4e08-a20d-7dd2bb853959") : secret "infra-operator-webhook-server-cert" not found Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.092588 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hzzj2" event={"ID":"61d8b171-8120-44cb-a074-54ea2aea3735","Type":"ContainerStarted","Data":"072ccad2e957734410f4d8dfd2bfa6f8e61f650374d39238cc3f6abcdd89e377"} Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.092704 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hzzj2" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.095883 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-jmr5r" event={"ID":"596a2bcb-e21f-4ea7-ac0d-1b1f313b7e82","Type":"ContainerStarted","Data":"c441faf95cce8a4cea134d69ae84c03edd4a3abae6ac28511f6d206b9581b893"} Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.096002 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-jmr5r" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.097513 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-jbx77" event={"ID":"5821c221-c684-480a-a174-2154d785d9be","Type":"ContainerStarted","Data":"b0055ecf3cada746d305a825f2dec2b3af96c8e5fdd37bfca3a210712fe4f45a"} Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.097657 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-jbx77" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.099685 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"690eb8dd99a38db0e2d128dc8fae0eb0e7ee256d3467527d01896edbadf9fc55"} Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.101702 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-zxh24" event={"ID":"1d2513e9-4b0c-4bf3-9fed-81a347f8e5bf","Type":"ContainerStarted","Data":"0b3dcefda51c268e3a100715dc1b0468f14b5d54f03e0db1df136a2a039d91c6"} Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.101826 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-zxh24" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.103212 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qrvc5" event={"ID":"63489758-4a49-40f2-8886-7c09cc103f40","Type":"ContainerStarted","Data":"4f47a86ad095ef2300cc375ced54fd945661db144d6f9992a1f704903039f8a6"} Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 
17:14:55.104250 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qrvc5" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.106266 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-6gzdm" event={"ID":"6ecd49cd-b0f1-40f9-80b6-5f0fedc99b97","Type":"ContainerStarted","Data":"5863c05d71686d75f519996d50ac4403275a2e2383bc7a0a81df4511e27edb8a"} Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.106453 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-6gzdm" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.121913 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hzzj2" podStartSLOduration=3.47531871 podStartE2EDuration="17.121892831s" podCreationTimestamp="2026-01-27 17:14:38 +0000 UTC" firstStartedPulling="2026-01-27 17:14:40.944877492 +0000 UTC m=+1056.043851041" lastFinishedPulling="2026-01-27 17:14:54.591451613 +0000 UTC m=+1069.690425162" observedRunningTime="2026-01-27 17:14:55.118447331 +0000 UTC m=+1070.217420890" watchObservedRunningTime="2026-01-27 17:14:55.121892831 +0000 UTC m=+1070.220866380" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.169710 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-jbx77" podStartSLOduration=3.602400062 podStartE2EDuration="17.169658997s" podCreationTimestamp="2026-01-27 17:14:38 +0000 UTC" firstStartedPulling="2026-01-27 17:14:40.991716912 +0000 UTC m=+1056.090690461" lastFinishedPulling="2026-01-27 17:14:54.558975847 +0000 UTC m=+1069.657949396" observedRunningTime="2026-01-27 17:14:55.166260769 +0000 UTC m=+1070.265234308" watchObservedRunningTime="2026-01-27 17:14:55.169658997 +0000 UTC m=+1070.268632546" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.195914 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-zxh24" podStartSLOduration=2.5557964589999997 podStartE2EDuration="16.195900514s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:40.950824993 +0000 UTC m=+1056.049798542" lastFinishedPulling="2026-01-27 17:14:54.590929048 +0000 UTC m=+1069.689902597" observedRunningTime="2026-01-27 17:14:55.192574908 +0000 UTC m=+1070.291548457" watchObservedRunningTime="2026-01-27 17:14:55.195900514 +0000 UTC m=+1070.294874053" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.218944 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-jmr5r" podStartSLOduration=2.65230382 podStartE2EDuration="16.218924877s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:40.991406743 +0000 UTC m=+1056.090380292" lastFinishedPulling="2026-01-27 17:14:54.5580278 +0000 UTC m=+1069.657001349" observedRunningTime="2026-01-27 17:14:55.217795915 +0000 UTC m=+1070.316769464" watchObservedRunningTime="2026-01-27 17:14:55.218924877 +0000 UTC m=+1070.317898426" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.258882 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt\" (UID: \"954304ce-c36e-4eec-989f-a56c4d63f97e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.259161 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qrvc5" podStartSLOduration=2.675930281 podStartE2EDuration="16.259140976s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:41.00797051 +0000 UTC m=+1056.106944059" lastFinishedPulling="2026-01-27 17:14:54.591181205 +0000 UTC m=+1069.690154754" observedRunningTime="2026-01-27 17:14:55.257390636 +0000 UTC m=+1070.356364185" watchObservedRunningTime="2026-01-27 17:14:55.259140976 +0000 UTC m=+1070.358114525" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.288364 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/954304ce-c36e-4eec-989f-a56c4d63f97e-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt\" (UID: \"954304ce-c36e-4eec-989f-a56c4d63f97e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.345030 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.352551 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-6gzdm" podStartSLOduration=3.076835045 podStartE2EDuration="17.352531778s" podCreationTimestamp="2026-01-27 17:14:38 +0000 UTC" firstStartedPulling="2026-01-27 17:14:40.265956985 +0000 UTC m=+1055.364930534" lastFinishedPulling="2026-01-27 17:14:54.541653718 +0000 UTC m=+1069.640627267" observedRunningTime="2026-01-27 17:14:55.325088637 +0000 UTC m=+1070.424062186" watchObservedRunningTime="2026-01-27 17:14:55.352531778 +0000 UTC m=+1070.451505327" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.807875 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.808197 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.815320 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-metrics-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " 
pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:55 crc kubenswrapper[5049]: I0127 17:14:55.822263 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0a4b955-b7b5-45be-a997-7ed2d360218e-webhook-certs\") pod \"openstack-operator-controller-manager-556b4c5b88-vhwl2\" (UID: \"d0a4b955-b7b5-45be-a997-7ed2d360218e\") " pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.002591 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt"] Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.072951 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.124481 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" event={"ID":"954304ce-c36e-4eec-989f-a56c4d63f97e","Type":"ContainerStarted","Data":"e98d55d987bc8cebab6fbaaee26c6b8a9a1a183dad0c32d58df23b54107f1895"} Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.126408 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-pb687" event={"ID":"713c4f3d-0a16-43b1-a9ba-52f2905863b7","Type":"ContainerStarted","Data":"9c68ca1ef7f1c68915d7ff26d3a47000ec2280c222f09e9c0eb3bb7247ddd4aa"} Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.127464 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-pb687" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.133352 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-rxn7s" event={"ID":"0b239d42-1dea-4559-8cdd-8db8cb8addab","Type":"ContainerStarted","Data":"808ae556e4dc63b9bc4b895a285ef8061a236642fe422d4274c8275ad6760877"} Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.133787 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-rxn7s" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.148624 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6fxd4" event={"ID":"1046a5c5-7064-4f3d-8a27-4c70edefff18","Type":"ContainerStarted","Data":"7fdc14303413dcb14bce4a5c17c5ef894d75a33c456a9afb456184a1e00a7872"} Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.149335 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6fxd4" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.165438 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-pb687" podStartSLOduration=4.618303711 podStartE2EDuration="18.165424186s" podCreationTimestamp="2026-01-27 17:14:38 +0000 UTC" firstStartedPulling="2026-01-27 17:14:40.995015647 +0000 UTC m=+1056.093989196" lastFinishedPulling="2026-01-27 17:14:54.542136122 +0000 UTC m=+1069.641109671" observedRunningTime="2026-01-27 17:14:56.155909282 +0000 UTC 
m=+1071.254882831" watchObservedRunningTime="2026-01-27 17:14:56.165424186 +0000 UTC m=+1071.264397735" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.168492 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-8d8dj" event={"ID":"cf5055fa-99ac-4063-b192-5743f331b01a","Type":"ContainerStarted","Data":"bcfe3664463d25cde814f71c6de52a383352a74ad57ed929d287dac7ee788c79"} Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.169074 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-8d8dj" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.187019 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6fxd4" podStartSLOduration=4.570529144 podStartE2EDuration="18.187006288s" podCreationTimestamp="2026-01-27 17:14:38 +0000 UTC" firstStartedPulling="2026-01-27 17:14:40.923536477 +0000 UTC m=+1056.022510036" lastFinishedPulling="2026-01-27 17:14:54.540013631 +0000 UTC m=+1069.638987180" observedRunningTime="2026-01-27 17:14:56.186013519 +0000 UTC m=+1071.284987078" watchObservedRunningTime="2026-01-27 17:14:56.187006288 +0000 UTC m=+1071.285979837" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.196655 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cz6ks" event={"ID":"42f93a5d-5678-4e1e-b5a0-1bd0017dab7c","Type":"ContainerStarted","Data":"aab2c4c89dfb6fa6c3a9ec96d7e665885fa059090d57fddc0982d173eecf6c6c"} Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.197294 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cz6ks" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.227875 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-qjftl" event={"ID":"f401ecb9-876a-4bed-9848-1ab332f71010","Type":"ContainerStarted","Data":"d448bc34c64af592bc06eeae575af8cc0eb623b2b739bec11b04ccabfb0af50d"} Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.227947 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-qjftl" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.230567 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-pqbc2" event={"ID":"b58745a3-9e9d-4337-8965-6caf2ade0bdd","Type":"ContainerStarted","Data":"e13c12b439fc4e6a4a66c4c6a5dd8a919e7f908c8a9b7b9a6a2b2f01335c8c3a"} Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.230605 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-pqbc2" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.250780 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-rxn7s" podStartSLOduration=3.865443494 podStartE2EDuration="17.250764696s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:41.173900662 +0000 UTC m=+1056.272874211" lastFinishedPulling="2026-01-27 17:14:54.559221864 +0000 UTC m=+1069.658195413" observedRunningTime="2026-01-27 17:14:56.221837752 +0000 
UTC m=+1071.320811301" watchObservedRunningTime="2026-01-27 17:14:56.250764696 +0000 UTC m=+1071.349738245" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.253077 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-8d8dj" podStartSLOduration=4.163007909 podStartE2EDuration="18.253069652s" podCreationTimestamp="2026-01-27 17:14:38 +0000 UTC" firstStartedPulling="2026-01-27 17:14:40.467559255 +0000 UTC m=+1055.566532804" lastFinishedPulling="2026-01-27 17:14:54.557620998 +0000 UTC m=+1069.656594547" observedRunningTime="2026-01-27 17:14:56.248807359 +0000 UTC m=+1071.347780908" watchObservedRunningTime="2026-01-27 17:14:56.253069652 +0000 UTC m=+1071.352043201" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.272815 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cz6ks" podStartSLOduration=3.6647276890000002 podStartE2EDuration="17.27278756s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:40.950823723 +0000 UTC m=+1056.049797272" lastFinishedPulling="2026-01-27 17:14:54.558883594 +0000 UTC m=+1069.657857143" observedRunningTime="2026-01-27 17:14:56.267709994 +0000 UTC m=+1071.366683543" watchObservedRunningTime="2026-01-27 17:14:56.27278756 +0000 UTC m=+1071.371761119" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.318869 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-pqbc2" podStartSLOduration=3.738387622 podStartE2EDuration="17.318857838s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:41.006684303 +0000 UTC m=+1056.105657852" lastFinishedPulling="2026-01-27 17:14:54.587154529 +0000 UTC m=+1069.686128068" observedRunningTime="2026-01-27 17:14:56.3168246 +0000 UTC m=+1071.415798149" watchObservedRunningTime="2026-01-27 17:14:56.318857838 +0000 UTC m=+1071.417831387" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.358196 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-qjftl" podStartSLOduration=3.717710366 podStartE2EDuration="17.358178931s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:40.9909683 +0000 UTC m=+1056.089941859" lastFinishedPulling="2026-01-27 17:14:54.631436875 +0000 UTC m=+1069.730410424" observedRunningTime="2026-01-27 17:14:56.347871094 +0000 UTC m=+1071.446844633" watchObservedRunningTime="2026-01-27 17:14:56.358178931 +0000 UTC m=+1071.457152480" Jan 27 17:14:56 crc kubenswrapper[5049]: I0127 17:14:56.570619 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2"] Jan 27 17:14:57 crc kubenswrapper[5049]: I0127 17:14:57.237980 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" event={"ID":"d0a4b955-b7b5-45be-a997-7ed2d360218e","Type":"ContainerStarted","Data":"2dc19b80b6e9674a41d98db56b4398d759546d00c26deac95d0527556e6dd200"} Jan 27 17:14:57 crc kubenswrapper[5049]: I0127 17:14:57.238253 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" 
event={"ID":"d0a4b955-b7b5-45be-a997-7ed2d360218e","Type":"ContainerStarted","Data":"947d15ec80171c33d19614f32194fbd9fad7801c5e66e8f192b26eb8315a6e7b"} Jan 27 17:14:57 crc kubenswrapper[5049]: I0127 17:14:57.239788 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" Jan 27 17:14:57 crc kubenswrapper[5049]: I0127 17:14:57.282236 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2" podStartSLOduration=18.282218903 podStartE2EDuration="18.282218903s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:14:57.274707487 +0000 UTC m=+1072.373681036" watchObservedRunningTime="2026-01-27 17:14:57.282218903 +0000 UTC m=+1072.381192452" Jan 27 17:14:59 crc kubenswrapper[5049]: I0127 17:14:59.297380 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-575ffb885b-jbx77" Jan 27 17:14:59 crc kubenswrapper[5049]: I0127 17:14:59.848303 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qrvc5" Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.143209 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t"] Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.145929 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t" Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.149995 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.150409 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.158293 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t"] Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.188466 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c9bb4b66c-rxn7s" Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.294931 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q5bt\" (UniqueName: \"kubernetes.io/projected/5fc45544-83c9-47c9-b26b-0e6cbeffc816-kube-api-access-2q5bt\") pod \"collect-profiles-29492235-mww8t\" (UID: \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t" Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.295075 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fc45544-83c9-47c9-b26b-0e6cbeffc816-secret-volume\") pod \"collect-profiles-29492235-mww8t\" (UID: \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t" Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 
17:15:00.295100 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fc45544-83c9-47c9-b26b-0e6cbeffc816-config-volume\") pod \"collect-profiles-29492235-mww8t\" (UID: \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t" Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.396077 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q5bt\" (UniqueName: \"kubernetes.io/projected/5fc45544-83c9-47c9-b26b-0e6cbeffc816-kube-api-access-2q5bt\") pod \"collect-profiles-29492235-mww8t\" (UID: \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t" Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.396189 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fc45544-83c9-47c9-b26b-0e6cbeffc816-config-volume\") pod \"collect-profiles-29492235-mww8t\" (UID: \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t" Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.396211 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fc45544-83c9-47c9-b26b-0e6cbeffc816-secret-volume\") pod \"collect-profiles-29492235-mww8t\" (UID: \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t" Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.397636 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fc45544-83c9-47c9-b26b-0e6cbeffc816-config-volume\") pod \"collect-profiles-29492235-mww8t\" (UID: \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t" Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.402080 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fc45544-83c9-47c9-b26b-0e6cbeffc816-secret-volume\") pod \"collect-profiles-29492235-mww8t\" (UID: \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t" Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.417575 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q5bt\" (UniqueName: \"kubernetes.io/projected/5fc45544-83c9-47c9-b26b-0e6cbeffc816-kube-api-access-2q5bt\") pod \"collect-profiles-29492235-mww8t\" (UID: \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t" Jan 27 17:15:00 crc kubenswrapper[5049]: I0127 17:15:00.481418 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t" Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.235244 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t"] Jan 27 17:15:05 crc kubenswrapper[5049]: W0127 17:15:05.236318 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fc45544_83c9_47c9_b26b_0e6cbeffc816.slice/crio-7b5916c3c010f8953d234d7343f2cd7a91089608eb176932f7a9990e9b48350b WatchSource:0}: Error finding container 7b5916c3c010f8953d234d7343f2cd7a91089608eb176932f7a9990e9b48350b: Status 404 returned error can't find the container with id 7b5916c3c010f8953d234d7343f2cd7a91089608eb176932f7a9990e9b48350b Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.318087 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz" event={"ID":"d5477f31-e31c-47a2-bbaf-543196a1908e","Type":"ContainerStarted","Data":"8cb7b64027f33c9f62caf0a84cf5be85b7e398d8c615bcf574fe45dac2b738ca"} Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.318289 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz" Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.320222 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf" event={"ID":"89eae96a-ae74-441c-b4f4-6423b01e11c9","Type":"ContainerStarted","Data":"d976d1eb9737febb117db49a23403220a7861f7ab5e84356f73cac067265be94"} Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.320720 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf" Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.321858 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd" event={"ID":"640dbea5-d940-4917-ba99-8b506007a8c8","Type":"ContainerStarted","Data":"18912bc48449e14d9268d5741dece51f3dcf269eb9b6d596ce2bcf95119245eb"} Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.321985 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd" Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.323547 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg" event={"ID":"23d834fc-5840-49a4-aa49-5a84e8490e39","Type":"ContainerStarted","Data":"bbe47b007843d73c006de8ec93cf0cfc9fdc8e263d9cf46e9d7d8da4756cc2ad"} Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.323788 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg" Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.324680 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh" event={"ID":"03f2c7d7-4a4d-479c-aace-8cb3f75f5a34","Type":"ContainerStarted","Data":"b749fc41a14a680abcdf3e7dde832042c250923478ec713dacbac2b1b06c686f"} Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.324838 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh" Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.326214 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t" event={"ID":"5fc45544-83c9-47c9-b26b-0e6cbeffc816","Type":"ContainerStarted","Data":"7b5916c3c010f8953d234d7343f2cd7a91089608eb176932f7a9990e9b48350b"} Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.327683 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw" event={"ID":"ac863301-b663-4e76-83af-5b1596a19d5a","Type":"ContainerStarted","Data":"56fc28c8c3097e3ac641fb77c4dc6c51f54cb9a539a513ffab9c89dad7bd6b0f"} Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.327881 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw" Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.329427 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" event={"ID":"954304ce-c36e-4eec-989f-a56c4d63f97e","Type":"ContainerStarted","Data":"7cfc0589d0c2e40f4f1b20ff9a7cca6f3d107ee13d0cd835a7bcc163dfd8f814"} Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.329541 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.330831 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-26ckr" event={"ID":"89c673d2-c27b-4b39-bb48-b463d5626491","Type":"ContainerStarted","Data":"693b328d915473130103e25391e4473bbf42646d4595d0ff42737d5c896d83b8"} Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.338549 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz" podStartSLOduration=2.511622625 podStartE2EDuration="26.338526359s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:41.009105453 +0000 UTC m=+1056.108079012" lastFinishedPulling="2026-01-27 17:15:04.836009197 +0000 UTC m=+1079.934982746" observedRunningTime="2026-01-27 17:15:05.333646049 +0000 UTC m=+1080.432619598" watchObservedRunningTime="2026-01-27 17:15:05.338526359 +0000 UTC m=+1080.437499908" Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.347746 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-26ckr" podStartSLOduration=2.498121716 podStartE2EDuration="26.347731415s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:41.008975399 +0000 UTC m=+1056.107948948" lastFinishedPulling="2026-01-27 17:15:04.858585098 +0000 UTC m=+1079.957558647" observedRunningTime="2026-01-27 17:15:05.346727176 +0000 UTC m=+1080.445700735" watchObservedRunningTime="2026-01-27 17:15:05.347731415 +0000 UTC m=+1080.446704964" Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.384111 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt" podStartSLOduration=17.596799477 podStartE2EDuration="26.384088772s" podCreationTimestamp="2026-01-27 
17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:56.048809935 +0000 UTC m=+1071.147783484" lastFinishedPulling="2026-01-27 17:15:04.83609921 +0000 UTC m=+1079.935072779" observedRunningTime="2026-01-27 17:15:05.379931303 +0000 UTC m=+1080.478904852" watchObservedRunningTime="2026-01-27 17:15:05.384088772 +0000 UTC m=+1080.483062321"
Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.438433 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg" podStartSLOduration=2.623309233 podStartE2EDuration="26.438415008s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:41.022860779 +0000 UTC m=+1056.121834328" lastFinishedPulling="2026-01-27 17:15:04.837966554 +0000 UTC m=+1079.936940103" observedRunningTime="2026-01-27 17:15:05.433598999 +0000 UTC m=+1080.532572548" watchObservedRunningTime="2026-01-27 17:15:05.438415008 +0000 UTC m=+1080.537388557"
Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.482024 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh" podStartSLOduration=2.689889323 podStartE2EDuration="26.482006014s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:41.022731156 +0000 UTC m=+1056.121704705" lastFinishedPulling="2026-01-27 17:15:04.814847847 +0000 UTC m=+1079.913821396" observedRunningTime="2026-01-27 17:15:05.481953043 +0000 UTC m=+1080.580926592" watchObservedRunningTime="2026-01-27 17:15:05.482006014 +0000 UTC m=+1080.580979563"
Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.519151 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw" podStartSLOduration=2.731964114 podStartE2EDuration="26.519132844s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:41.022760286 +0000 UTC m=+1056.121733835" lastFinishedPulling="2026-01-27 17:15:04.809929016 +0000 UTC m=+1079.908902565" observedRunningTime="2026-01-27 17:15:05.518566758 +0000 UTC m=+1080.617540307" watchObservedRunningTime="2026-01-27 17:15:05.519132844 +0000 UTC m=+1080.618106393"
Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.558433 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf" podStartSLOduration=2.721815173 podStartE2EDuration="26.558412056s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:41.022355275 +0000 UTC m=+1056.121328824" lastFinishedPulling="2026-01-27 17:15:04.858952158 +0000 UTC m=+1079.957925707" observedRunningTime="2026-01-27 17:15:05.556413799 +0000 UTC m=+1080.655387338" watchObservedRunningTime="2026-01-27 17:15:05.558412056 +0000 UTC m=+1080.657385605"
Jan 27 17:15:05 crc kubenswrapper[5049]: I0127 17:15:05.691326 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd" podStartSLOduration=2.887822258 podStartE2EDuration="26.691307847s" podCreationTimestamp="2026-01-27 17:14:39 +0000 UTC" firstStartedPulling="2026-01-27 17:14:41.025271229 +0000 UTC m=+1056.124244858" lastFinishedPulling="2026-01-27 17:15:04.828756908 +0000 UTC m=+1079.927730447" observedRunningTime="2026-01-27 17:15:05.59636902 +0000 UTC m=+1080.695342569" watchObservedRunningTime="2026-01-27 17:15:05.691307847 +0000 UTC m=+1080.790281396"
Jan 27 17:15:06 crc kubenswrapper[5049]: I0127 17:15:06.081745 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-556b4c5b88-vhwl2"
Jan 27 17:15:06 crc kubenswrapper[5049]: I0127 17:15:06.343489 5049 generic.go:334] "Generic (PLEG): container finished" podID="5fc45544-83c9-47c9-b26b-0e6cbeffc816" containerID="868790039cac7fb648d826c8f3b581bb9055b9eab643e750264daddedf3227a5" exitCode=0
Jan 27 17:15:06 crc kubenswrapper[5049]: I0127 17:15:06.343573 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t" event={"ID":"5fc45544-83c9-47c9-b26b-0e6cbeffc816","Type":"ContainerDied","Data":"868790039cac7fb648d826c8f3b581bb9055b9eab643e750264daddedf3227a5"}
Jan 27 17:15:07 crc kubenswrapper[5049]: I0127 17:15:07.609652 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t"
Jan 27 17:15:07 crc kubenswrapper[5049]: I0127 17:15:07.633916 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q5bt\" (UniqueName: \"kubernetes.io/projected/5fc45544-83c9-47c9-b26b-0e6cbeffc816-kube-api-access-2q5bt\") pod \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\" (UID: \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\") "
Jan 27 17:15:07 crc kubenswrapper[5049]: I0127 17:15:07.634002 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fc45544-83c9-47c9-b26b-0e6cbeffc816-config-volume\") pod \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\" (UID: \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\") "
Jan 27 17:15:07 crc kubenswrapper[5049]: I0127 17:15:07.634105 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fc45544-83c9-47c9-b26b-0e6cbeffc816-secret-volume\") pod \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\" (UID: \"5fc45544-83c9-47c9-b26b-0e6cbeffc816\") "
Jan 27 17:15:07 crc kubenswrapper[5049]: I0127 17:15:07.637409 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fc45544-83c9-47c9-b26b-0e6cbeffc816-config-volume" (OuterVolumeSpecName: "config-volume") pod "5fc45544-83c9-47c9-b26b-0e6cbeffc816" (UID: "5fc45544-83c9-47c9-b26b-0e6cbeffc816"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:15:07 crc kubenswrapper[5049]: I0127 17:15:07.639902 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fc45544-83c9-47c9-b26b-0e6cbeffc816-kube-api-access-2q5bt" (OuterVolumeSpecName: "kube-api-access-2q5bt") pod "5fc45544-83c9-47c9-b26b-0e6cbeffc816" (UID: "5fc45544-83c9-47c9-b26b-0e6cbeffc816"). InnerVolumeSpecName "kube-api-access-2q5bt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:15:07 crc kubenswrapper[5049]: I0127 17:15:07.639941 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc45544-83c9-47c9-b26b-0e6cbeffc816-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5fc45544-83c9-47c9-b26b-0e6cbeffc816" (UID: "5fc45544-83c9-47c9-b26b-0e6cbeffc816"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:15:07 crc kubenswrapper[5049]: I0127 17:15:07.737105 5049 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fc45544-83c9-47c9-b26b-0e6cbeffc816-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 17:15:07 crc kubenswrapper[5049]: I0127 17:15:07.737497 5049 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5fc45544-83c9-47c9-b26b-0e6cbeffc816-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 17:15:07 crc kubenswrapper[5049]: I0127 17:15:07.745267 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q5bt\" (UniqueName: \"kubernetes.io/projected/5fc45544-83c9-47c9-b26b-0e6cbeffc816-kube-api-access-2q5bt\") on node \"crc\" DevicePath \"\""
Jan 27 17:15:08 crc kubenswrapper[5049]: I0127 17:15:08.360629 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t" event={"ID":"5fc45544-83c9-47c9-b26b-0e6cbeffc816","Type":"ContainerDied","Data":"7b5916c3c010f8953d234d7343f2cd7a91089608eb176932f7a9990e9b48350b"}
Jan 27 17:15:08 crc kubenswrapper[5049]: I0127 17:15:08.360718 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b5916c3c010f8953d234d7343f2cd7a91089608eb176932f7a9990e9b48350b"
Jan 27 17:15:08 crc kubenswrapper[5049]: I0127 17:15:08.360735 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t"
Jan 27 17:15:09 crc kubenswrapper[5049]: I0127 17:15:09.227860 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-8d8dj"
Jan 27 17:15:09 crc kubenswrapper[5049]: I0127 17:15:09.253195 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-6fxd4"
Jan 27 17:15:09 crc kubenswrapper[5049]: I0127 17:15:09.288559 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-pb687"
Jan 27 17:15:09 crc kubenswrapper[5049]: I0127 17:15:09.289346 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-6gzdm"
Jan 27 17:15:09 crc kubenswrapper[5049]: I0127 17:15:09.392271 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-pqbc2"
Jan 27 17:15:09 crc kubenswrapper[5049]: I0127 17:15:09.408150 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-qjftl"
Jan 27 17:15:09 crc kubenswrapper[5049]: I0127 17:15:09.640465 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-hzzj2"
Jan 27 17:15:09 crc kubenswrapper[5049]: I0127 17:15:09.689097 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-zxh24"
Jan 27 17:15:09 crc kubenswrapper[5049]: I0127 17:15:09.761032 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-cz6ks"
Jan 27 17:15:09 crc kubenswrapper[5049]: I0127 17:15:09.868481 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-jmr5r"
Jan 27 17:15:11 crc kubenswrapper[5049]: I0127 17:15:11.099418 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-hdd85\" (UID: \"61d18054-36c7-4e08-a20d-7dd2bb853959\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85"
Jan 27 17:15:11 crc kubenswrapper[5049]: I0127 17:15:11.107614 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/61d18054-36c7-4e08-a20d-7dd2bb853959-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-hdd85\" (UID: \"61d18054-36c7-4e08-a20d-7dd2bb853959\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85"
Jan 27 17:15:11 crc kubenswrapper[5049]: I0127 17:15:11.179221 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2wmz7"
Jan 27 17:15:11 crc kubenswrapper[5049]: I0127 17:15:11.187736 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85"
Jan 27 17:15:11 crc kubenswrapper[5049]: I0127 17:15:11.441131 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85"]
Jan 27 17:15:12 crc kubenswrapper[5049]: I0127 17:15:12.392523 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85" event={"ID":"61d18054-36c7-4e08-a20d-7dd2bb853959","Type":"ContainerStarted","Data":"289f0daa170068a599c8a9f899aa8eec541fe29f05132e4459b6d96b1b0ff89d"}
Jan 27 17:15:15 crc kubenswrapper[5049]: I0127 17:15:15.352942 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt"
Jan 27 17:15:19 crc kubenswrapper[5049]: I0127 17:15:19.416038 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-tdbpz"
Jan 27 17:15:19 crc kubenswrapper[5049]: I0127 17:15:19.447567 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85" event={"ID":"61d18054-36c7-4e08-a20d-7dd2bb853959","Type":"ContainerStarted","Data":"f9edf3b348b396b2ca051126165f2608f9a35bdf286d08af0abcaba47d4d1297"}
Jan 27 17:15:19 crc kubenswrapper[5049]: I0127 17:15:19.447703 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85"
Jan 27 17:15:19 crc kubenswrapper[5049]: I0127 17:15:19.496346 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85" podStartSLOduration=33.71462332 podStartE2EDuration="41.496324642s" podCreationTimestamp="2026-01-27 17:14:38 +0000 UTC" firstStartedPulling="2026-01-27 17:15:11.457421378 +0000 UTC m=+1086.556394927" lastFinishedPulling="2026-01-27 17:15:19.2391227 +0000 UTC m=+1094.338096249" observedRunningTime="2026-01-27 17:15:19.493177641 +0000 UTC m=+1094.592151210" watchObservedRunningTime="2026-01-27 17:15:19.496324642 +0000 UTC m=+1094.595298211"
Jan 27 17:15:19 crc kubenswrapper[5049]: I0127 17:15:19.530509 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-45drh"
Jan 27 17:15:19 crc kubenswrapper[5049]: I0127 17:15:19.611688 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-shfpw"
Jan 27 17:15:19 crc kubenswrapper[5049]: I0127 17:15:19.663176 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-ddcbfd695-24wdf"
Jan 27 17:15:19 crc kubenswrapper[5049]: I0127 17:15:19.809373 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-b7gkd"
Jan 27 17:15:19 crc kubenswrapper[5049]: I0127 17:15:19.950261 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dzmjg"
Jan 27 17:15:31 crc kubenswrapper[5049]: I0127 17:15:31.195957 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-hdd85"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.410423 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-b5c6q"]
Jan 27 17:15:47 crc kubenswrapper[5049]: E0127 17:15:47.411528 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fc45544-83c9-47c9-b26b-0e6cbeffc816" containerName="collect-profiles"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.411551 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fc45544-83c9-47c9-b26b-0e6cbeffc816" containerName="collect-profiles"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.411809 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fc45544-83c9-47c9-b26b-0e6cbeffc816" containerName="collect-profiles"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.416856 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.420857 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.421169 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.421177 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-b5c6q"]
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.421346 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-62mj6"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.421572 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.430324 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.487059 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38791ef9-a379-4760-9320-403bf57b0199-config\") pod \"dnsmasq-dns-78dd6ddcc-b5c6q\" (UID: \"38791ef9-a379-4760-9320-403bf57b0199\") " pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.487121 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smfwj\" (UniqueName: \"kubernetes.io/projected/38791ef9-a379-4760-9320-403bf57b0199-kube-api-access-smfwj\") pod \"dnsmasq-dns-78dd6ddcc-b5c6q\" (UID: \"38791ef9-a379-4760-9320-403bf57b0199\") " pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.487327 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38791ef9-a379-4760-9320-403bf57b0199-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-b5c6q\" (UID: \"38791ef9-a379-4760-9320-403bf57b0199\") " pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.588080 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38791ef9-a379-4760-9320-403bf57b0199-config\") pod \"dnsmasq-dns-78dd6ddcc-b5c6q\" (UID: \"38791ef9-a379-4760-9320-403bf57b0199\") " pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.588132 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smfwj\" (UniqueName: \"kubernetes.io/projected/38791ef9-a379-4760-9320-403bf57b0199-kube-api-access-smfwj\") pod \"dnsmasq-dns-78dd6ddcc-b5c6q\" (UID: \"38791ef9-a379-4760-9320-403bf57b0199\") " pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.588206 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38791ef9-a379-4760-9320-403bf57b0199-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-b5c6q\" (UID: \"38791ef9-a379-4760-9320-403bf57b0199\") " pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.589135 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38791ef9-a379-4760-9320-403bf57b0199-config\") pod \"dnsmasq-dns-78dd6ddcc-b5c6q\" (UID: \"38791ef9-a379-4760-9320-403bf57b0199\") " pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.589210 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38791ef9-a379-4760-9320-403bf57b0199-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-b5c6q\" (UID: \"38791ef9-a379-4760-9320-403bf57b0199\") " pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.608002 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smfwj\" (UniqueName: \"kubernetes.io/projected/38791ef9-a379-4760-9320-403bf57b0199-kube-api-access-smfwj\") pod \"dnsmasq-dns-78dd6ddcc-b5c6q\" (UID: \"38791ef9-a379-4760-9320-403bf57b0199\") " pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q"
Jan 27 17:15:47 crc kubenswrapper[5049]: I0127 17:15:47.742176 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q"
Jan 27 17:15:48 crc kubenswrapper[5049]: I0127 17:15:48.171431 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-b5c6q"]
Jan 27 17:15:48 crc kubenswrapper[5049]: I0127 17:15:48.180964 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 17:15:48 crc kubenswrapper[5049]: I0127 17:15:48.691594 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q" event={"ID":"38791ef9-a379-4760-9320-403bf57b0199","Type":"ContainerStarted","Data":"119a7b654e128e42ad0b1e6f2b10be091c0eddf6916a7b6ff42621870e25dc53"}
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.255034 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-4z9fb"]
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.257397 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.268815 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-4z9fb"]
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.344998 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f377b5aa-3901-48ee-a680-81d820af1d56-config\") pod \"dnsmasq-dns-5ccc8479f9-4z9fb\" (UID: \"f377b5aa-3901-48ee-a680-81d820af1d56\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.345085 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxrdd\" (UniqueName: \"kubernetes.io/projected/f377b5aa-3901-48ee-a680-81d820af1d56-kube-api-access-wxrdd\") pod \"dnsmasq-dns-5ccc8479f9-4z9fb\" (UID: \"f377b5aa-3901-48ee-a680-81d820af1d56\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.345117 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f377b5aa-3901-48ee-a680-81d820af1d56-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-4z9fb\" (UID: \"f377b5aa-3901-48ee-a680-81d820af1d56\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.452017 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f377b5aa-3901-48ee-a680-81d820af1d56-config\") pod \"dnsmasq-dns-5ccc8479f9-4z9fb\" (UID: \"f377b5aa-3901-48ee-a680-81d820af1d56\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.452446 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxrdd\" (UniqueName: \"kubernetes.io/projected/f377b5aa-3901-48ee-a680-81d820af1d56-kube-api-access-wxrdd\") pod \"dnsmasq-dns-5ccc8479f9-4z9fb\" (UID: \"f377b5aa-3901-48ee-a680-81d820af1d56\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.452488 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f377b5aa-3901-48ee-a680-81d820af1d56-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-4z9fb\" (UID: \"f377b5aa-3901-48ee-a680-81d820af1d56\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.453173 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f377b5aa-3901-48ee-a680-81d820af1d56-config\") pod \"dnsmasq-dns-5ccc8479f9-4z9fb\" (UID: \"f377b5aa-3901-48ee-a680-81d820af1d56\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.453535 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f377b5aa-3901-48ee-a680-81d820af1d56-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-4z9fb\" (UID: \"f377b5aa-3901-48ee-a680-81d820af1d56\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.474135 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxrdd\" (UniqueName: \"kubernetes.io/projected/f377b5aa-3901-48ee-a680-81d820af1d56-kube-api-access-wxrdd\") pod \"dnsmasq-dns-5ccc8479f9-4z9fb\" (UID: \"f377b5aa-3901-48ee-a680-81d820af1d56\") " pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.538786 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-b5c6q"]
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.561446 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-f9s5t"]
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.562925 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.582496 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.588819 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-f9s5t"]
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.655348 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpphx\" (UniqueName: \"kubernetes.io/projected/c3319e0c-70bb-4932-8408-e633569624c8-kube-api-access-mpphx\") pod \"dnsmasq-dns-57d769cc4f-f9s5t\" (UID: \"c3319e0c-70bb-4932-8408-e633569624c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.655414 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3319e0c-70bb-4932-8408-e633569624c8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-f9s5t\" (UID: \"c3319e0c-70bb-4932-8408-e633569624c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.655478 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3319e0c-70bb-4932-8408-e633569624c8-config\") pod \"dnsmasq-dns-57d769cc4f-f9s5t\" (UID: \"c3319e0c-70bb-4932-8408-e633569624c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.756327 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3319e0c-70bb-4932-8408-e633569624c8-config\") pod \"dnsmasq-dns-57d769cc4f-f9s5t\" (UID: \"c3319e0c-70bb-4932-8408-e633569624c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.756417 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpphx\" (UniqueName: \"kubernetes.io/projected/c3319e0c-70bb-4932-8408-e633569624c8-kube-api-access-mpphx\") pod \"dnsmasq-dns-57d769cc4f-f9s5t\" (UID: \"c3319e0c-70bb-4932-8408-e633569624c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.756487 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3319e0c-70bb-4932-8408-e633569624c8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-f9s5t\" (UID: \"c3319e0c-70bb-4932-8408-e633569624c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.761012 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3319e0c-70bb-4932-8408-e633569624c8-config\") pod \"dnsmasq-dns-57d769cc4f-f9s5t\" (UID: \"c3319e0c-70bb-4932-8408-e633569624c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.761304 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3319e0c-70bb-4932-8408-e633569624c8-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-f9s5t\" (UID: \"c3319e0c-70bb-4932-8408-e633569624c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.781955 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpphx\" (UniqueName: \"kubernetes.io/projected/c3319e0c-70bb-4932-8408-e633569624c8-kube-api-access-mpphx\") pod \"dnsmasq-dns-57d769cc4f-f9s5t\" (UID: \"c3319e0c-70bb-4932-8408-e633569624c8\") " pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t"
Jan 27 17:15:50 crc kubenswrapper[5049]: I0127 17:15:50.887117 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.084177 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-4z9fb"]
Jan 27 17:15:51 crc kubenswrapper[5049]: W0127 17:15:51.100152 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf377b5aa_3901_48ee_a680_81d820af1d56.slice/crio-4783fb201ae7114a23a0258ffd85aacd7be9b7646004d0f302680596e156fd56 WatchSource:0}: Error finding container 4783fb201ae7114a23a0258ffd85aacd7be9b7646004d0f302680596e156fd56: Status 404 returned error can't find the container with id 4783fb201ae7114a23a0258ffd85aacd7be9b7646004d0f302680596e156fd56
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.360261 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-f9s5t"]
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.420815 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.423100 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.425408 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vkgm8"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.425932 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.425969 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.434964 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.436375 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.437015 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.445217 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.445836 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.465813 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8qrt\" (UniqueName: \"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-kube-api-access-n8qrt\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.465878 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.466008 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.466052 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.466094 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbb24b4b-dfbd-431f-8244-098c40f7c24f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.466201 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.466236 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.466269 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.466288 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbb24b4b-dfbd-431f-8244-098c40f7c24f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.466356 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.466376 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.567371 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbb24b4b-dfbd-431f-8244-098c40f7c24f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.567422 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.567454 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.567474 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.567494 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbb24b4b-dfbd-431f-8244-098c40f7c24f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.567516 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.567537 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.567573 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8qrt\" (UniqueName: \"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-kube-api-access-n8qrt\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.567597 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.567618 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.567632 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.568617 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.568800 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.569046 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.569393 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.569577 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.569641 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.572569 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.575565 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbb24b4b-dfbd-431f-8244-098c40f7c24f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.577787 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.586986 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbb24b4b-dfbd-431f-8244-098c40f7c24f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.589640 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.592145 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8qrt\" (UniqueName: \"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-kube-api-access-n8qrt\") pod \"rabbitmq-cell1-server-0\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.690757 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.693152 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.696964 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ntd72"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.697199 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.697389 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.697555 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.698411 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.698547 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.699238 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.714644 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.731551 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t" event={"ID":"c3319e0c-70bb-4932-8408-e633569624c8","Type":"ContainerStarted","Data":"3098fc1c7457786b42e3d87a38065ed4ded34db6726f0dce844453b25bc52072"}
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.733582 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb" event={"ID":"f377b5aa-3901-48ee-a680-81d820af1d56","Type":"ContainerStarted","Data":"4783fb201ae7114a23a0258ffd85aacd7be9b7646004d0f302680596e156fd56"}
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.755549 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.871736 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.871809 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.871833 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-server-conf\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.871875 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.871900 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-pod-info\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.872052 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.872093 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.872148 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.873909 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.873961 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cm74\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-kube-api-access-4cm74\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.874004 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.975477 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.975781 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-server-conf\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.975810 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.975833 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-pod-info\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.975861 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.975887 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.975918 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.975938 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.975953 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cm74\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-kube-api-access-4cm74\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.975971 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.975998 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.976045 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.976129 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.976270 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.977371 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.977965 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-server-conf\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.978908 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.985423 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.985429 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-pod-info\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.985428 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.985787 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.995454 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cm74\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-kube-api-access-4cm74\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:51 crc kubenswrapper[5049]: I0127 17:15:51.999849 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " pod="openstack/rabbitmq-server-0"
Jan 27 17:15:52 crc kubenswrapper[5049]: I0127 17:15:52.020894 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 17:15:52 crc kubenswrapper[5049]: I0127 17:15:52.941516 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 17:15:52 crc kubenswrapper[5049]: I0127 17:15:52.943066 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 27 17:15:52 crc kubenswrapper[5049]: I0127 17:15:52.950246 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 27 17:15:52 crc kubenswrapper[5049]: I0127 17:15:52.950720 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-fqv7h"
Jan 27 17:15:52 crc kubenswrapper[5049]: I0127 17:15:52.952112 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 27 17:15:52 crc kubenswrapper[5049]: I0127 17:15:52.952292 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 27 17:15:52 crc kubenswrapper[5049]: I0127 17:15:52.953440 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 27 17:15:52 crc kubenswrapper[5049]: I0127 17:15:52.959456 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.089873 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de39a65a-7265-4418-a94b-f8f8f30c3807-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.089935 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-operator-scripts\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.090034 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-kolla-config\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.090116 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.090156 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24ls6\" (UniqueName: \"kubernetes.io/projected/de39a65a-7265-4418-a94b-f8f8f30c3807-kube-api-access-24ls6\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.090185 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/de39a65a-7265-4418-a94b-f8f8f30c3807-config-data-generated\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.090213 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/de39a65a-7265-4418-a94b-f8f8f30c3807-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.090237 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-config-data-default\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.191524 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-kolla-config\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.191602 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.191630 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24ls6\" (UniqueName: \"kubernetes.io/projected/de39a65a-7265-4418-a94b-f8f8f30c3807-kube-api-access-24ls6\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.191655 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/de39a65a-7265-4418-a94b-f8f8f30c3807-config-data-generated\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.191696 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/de39a65a-7265-4418-a94b-f8f8f30c3807-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.191717 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-config-data-default\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.191732 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.191740 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de39a65a-7265-4418-a94b-f8f8f30c3807-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.191784 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-operator-scripts\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.192219 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/de39a65a-7265-4418-a94b-f8f8f30c3807-config-data-generated\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.192391 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-kolla-config\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.193384 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-operator-scripts\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.194437 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-config-data-default\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.195596 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/de39a65a-7265-4418-a94b-f8f8f30c3807-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.197197 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de39a65a-7265-4418-a94b-f8f8f30c3807-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.207939 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24ls6\" (UniqueName: \"kubernetes.io/projected/de39a65a-7265-4418-a94b-f8f8f30c3807-kube-api-access-24ls6\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.210960 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " pod="openstack/openstack-galera-0"
Jan 27 17:15:53 crc kubenswrapper[5049]: I0127 17:15:53.266388 5049 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.355190 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.359224 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.362443 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.362488 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.362722 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-djmgl" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.363011 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.371208 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.511978 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.512823 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.514987 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-7qhzk" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.515860 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.516646 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95574d5f-6872-4ff3-a7a4-44a960bb46f0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.516689 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.516714 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fslt2\" (UniqueName: \"kubernetes.io/projected/95574d5f-6872-4ff3-a7a4-44a960bb46f0-kube-api-access-fslt2\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.516744 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" 
Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.516758 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/95574d5f-6872-4ff3-a7a4-44a960bb46f0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.516779 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/95574d5f-6872-4ff3-a7a4-44a960bb46f0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.516797 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.516829 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.517575 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.525911 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.619895 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.619993 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95jk9\" (UniqueName: \"kubernetes.io/projected/28327cb6-87a9-4b24-b8fb-f43c33076b1b-kube-api-access-95jk9\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.620129 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28327cb6-87a9-4b24-b8fb-f43c33076b1b-kolla-config\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.620177 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95574d5f-6872-4ff3-a7a4-44a960bb46f0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.620229 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.620248 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fslt2\" (UniqueName: \"kubernetes.io/projected/95574d5f-6872-4ff3-a7a4-44a960bb46f0-kube-api-access-fslt2\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.620293 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28327cb6-87a9-4b24-b8fb-f43c33076b1b-config-data\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.620322 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.620338 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/95574d5f-6872-4ff3-a7a4-44a960bb46f0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.620388 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/95574d5f-6872-4ff3-a7a4-44a960bb46f0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.620409 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28327cb6-87a9-4b24-b8fb-f43c33076b1b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.620426 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.620472 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28327cb6-87a9-4b24-b8fb-f43c33076b1b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.621596 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.623608 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/95574d5f-6872-4ff3-a7a4-44a960bb46f0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.623889 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.624574 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.625752 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.628392 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/95574d5f-6872-4ff3-a7a4-44a960bb46f0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.635312 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95574d5f-6872-4ff3-a7a4-44a960bb46f0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.638025 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fslt2\" (UniqueName: \"kubernetes.io/projected/95574d5f-6872-4ff3-a7a4-44a960bb46f0-kube-api-access-fslt2\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.644005 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.724294 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28327cb6-87a9-4b24-b8fb-f43c33076b1b-kolla-config\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 
17:15:54.724373 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28327cb6-87a9-4b24-b8fb-f43c33076b1b-config-data\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.724432 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28327cb6-87a9-4b24-b8fb-f43c33076b1b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.724451 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28327cb6-87a9-4b24-b8fb-f43c33076b1b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.724508 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95jk9\" (UniqueName: \"kubernetes.io/projected/28327cb6-87a9-4b24-b8fb-f43c33076b1b-kube-api-access-95jk9\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.725285 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28327cb6-87a9-4b24-b8fb-f43c33076b1b-config-data\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.725306 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28327cb6-87a9-4b24-b8fb-f43c33076b1b-kolla-config\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.729341 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28327cb6-87a9-4b24-b8fb-f43c33076b1b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.730384 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28327cb6-87a9-4b24-b8fb-f43c33076b1b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.739248 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.741410 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95jk9\" (UniqueName: \"kubernetes.io/projected/28327cb6-87a9-4b24-b8fb-f43c33076b1b-kube-api-access-95jk9\") pod \"memcached-0\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " pod="openstack/memcached-0" Jan 27 17:15:54 crc kubenswrapper[5049]: I0127 17:15:54.825473 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 17:15:56 crc kubenswrapper[5049]: I0127 17:15:56.221017 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 17:15:56 crc kubenswrapper[5049]: I0127 17:15:56.222282 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 17:15:56 crc kubenswrapper[5049]: I0127 17:15:56.223967 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mbcsn" Jan 27 17:15:56 crc kubenswrapper[5049]: I0127 17:15:56.232260 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 17:15:56 crc kubenswrapper[5049]: I0127 17:15:56.352076 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz49k\" (UniqueName: \"kubernetes.io/projected/b4962a09-aea5-455e-8620-b83f9ede60e5-kube-api-access-dz49k\") pod \"kube-state-metrics-0\" (UID: \"b4962a09-aea5-455e-8620-b83f9ede60e5\") " pod="openstack/kube-state-metrics-0" Jan 27 17:15:56 crc kubenswrapper[5049]: I0127 17:15:56.453477 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz49k\" (UniqueName: \"kubernetes.io/projected/b4962a09-aea5-455e-8620-b83f9ede60e5-kube-api-access-dz49k\") pod \"kube-state-metrics-0\" (UID: \"b4962a09-aea5-455e-8620-b83f9ede60e5\") " pod="openstack/kube-state-metrics-0" Jan 27 17:15:56 crc kubenswrapper[5049]: I0127 17:15:56.495635 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz49k\" (UniqueName: \"kubernetes.io/projected/b4962a09-aea5-455e-8620-b83f9ede60e5-kube-api-access-dz49k\") pod \"kube-state-metrics-0\" (UID: \"b4962a09-aea5-455e-8620-b83f9ede60e5\") " pod="openstack/kube-state-metrics-0" Jan 27 17:15:56 crc kubenswrapper[5049]: I0127 17:15:56.551286 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.781604 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pv2qx"] Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.782864 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pv2qx" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.784630 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-xjwkw" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.784745 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.788002 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.796381 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7s8s5"] Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.800786 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.806151 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pv2qx"] Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.839545 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7s8s5"] Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.908881 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/389cf061-3e03-4e54-bf97-c88a747fd18b-combined-ca-bundle\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.908958 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-log-ovn\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.909005 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-run\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.909033 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-run\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.909061 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/389cf061-3e03-4e54-bf97-c88a747fd18b-ovn-controller-tls-certs\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.909188 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-scripts\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.909215 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-lib\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.909256 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-run-ovn\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 
17:15:59.909344 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-log\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.909405 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxm67\" (UniqueName: \"kubernetes.io/projected/389cf061-3e03-4e54-bf97-c88a747fd18b-kube-api-access-vxm67\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.909438 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp5f6\" (UniqueName: \"kubernetes.io/projected/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-kube-api-access-zp5f6\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.909498 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-etc-ovs\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:15:59 crc kubenswrapper[5049]: I0127 17:15:59.909516 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/389cf061-3e03-4e54-bf97-c88a747fd18b-scripts\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.011072 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-log\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.011125 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxm67\" (UniqueName: \"kubernetes.io/projected/389cf061-3e03-4e54-bf97-c88a747fd18b-kube-api-access-vxm67\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.011150 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp5f6\" (UniqueName: \"kubernetes.io/projected/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-kube-api-access-zp5f6\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.011184 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-etc-ovs\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.011202 5049 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/389cf061-3e03-4e54-bf97-c88a747fd18b-scripts\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.011240 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/389cf061-3e03-4e54-bf97-c88a747fd18b-combined-ca-bundle\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.011273 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-log-ovn\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.011288 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-run\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.011309 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-run\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.011333 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/389cf061-3e03-4e54-bf97-c88a747fd18b-ovn-controller-tls-certs\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.011370 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-scripts\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.011389 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-lib\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.011408 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-run-ovn\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.012123 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-run\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " 
pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.012194 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-log-ovn\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.012297 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-log\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.012295 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-run\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.013137 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-run-ovn\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.014691 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-etc-ovs\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.014849 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/389cf061-3e03-4e54-bf97-c88a747fd18b-scripts\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.015419 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-scripts\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.015717 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-lib\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.022816 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/389cf061-3e03-4e54-bf97-c88a747fd18b-ovn-controller-tls-certs\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.023204 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/389cf061-3e03-4e54-bf97-c88a747fd18b-combined-ca-bundle\") pod 
\"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.031480 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxm67\" (UniqueName: \"kubernetes.io/projected/389cf061-3e03-4e54-bf97-c88a747fd18b-kube-api-access-vxm67\") pod \"ovn-controller-pv2qx\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.034577 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp5f6\" (UniqueName: \"kubernetes.io/projected/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-kube-api-access-zp5f6\") pod \"ovn-controller-ovs-7s8s5\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.100314 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.134221 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.670773 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.672269 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.674258 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.675073 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.675076 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6njf2" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.675373 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.675719 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.690379 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.824998 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.825100 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxjjf\" (UniqueName: \"kubernetes.io/projected/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-kube-api-access-dxjjf\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.825136 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.825188 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.825285 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.825328 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-config\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.825360 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.825411 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.926442 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.926498 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-config\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.926546 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.926619 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: 
I0127 17:16:00.926663 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.926791 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxjjf\" (UniqueName: \"kubernetes.io/projected/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-kube-api-access-dxjjf\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.926826 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.926862 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.926949 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.927133 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.927221 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-config\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.928409 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.932836 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.933520 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.941594 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.944868 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxjjf\" (UniqueName: \"kubernetes.io/projected/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-kube-api-access-dxjjf\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:00 crc kubenswrapper[5049]: I0127 17:16:00.959514 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:01 crc kubenswrapper[5049]: I0127 17:16:01.006264 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:02 crc kubenswrapper[5049]: E0127 17:16:02.978134 5049 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 17:16:02 crc kubenswrapper[5049]: E0127 17:16:02.978845 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-smfwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-b5c6q_openstack(38791ef9-a379-4760-9320-403bf57b0199): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 17:16:02 crc kubenswrapper[5049]: E0127 17:16:02.980178 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q" podUID="38791ef9-a379-4760-9320-403bf57b0199" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.489028 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.557153 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.558906 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.560255 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.560896 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-km7sz" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.561159 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.580899 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.597597 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.604066 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.672777 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.686803 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.687526 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.687562 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4tbf\" (UniqueName: \"kubernetes.io/projected/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-kube-api-access-k4tbf\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.687598 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.687623 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-config\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.687637 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.687652 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-ovsdb-rundir\") pod 
\"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.687685 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.687708 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.701857 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pv2qx"] Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.788687 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.788743 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4tbf\" (UniqueName: \"kubernetes.io/projected/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-kube-api-access-k4tbf\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.788788 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.788812 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-config\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.788828 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.788842 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.788859 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " 
pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.788881 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.790095 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.791079 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.791232 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-config\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.791428 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.794841 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.794875 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.795489 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.807155 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4tbf\" (UniqueName: \"kubernetes.io/projected/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-kube-api-access-k4tbf\") pod \"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.819029 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"ovsdbserver-sb-0\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.853566 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.860344 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.861906 5049 generic.go:334] "Generic (PLEG): container finished" podID="f377b5aa-3901-48ee-a680-81d820af1d56" containerID="e508260117911efe6a41c619657fb398cddb38b6c461d8df6c3fe25ea777a1e5" exitCode=0 Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.862029 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb" event={"ID":"f377b5aa-3901-48ee-a680-81d820af1d56","Type":"ContainerDied","Data":"e508260117911efe6a41c619657fb398cddb38b6c461d8df6c3fe25ea777a1e5"} Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.864911 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"de39a65a-7265-4418-a94b-f8f8f30c3807","Type":"ContainerStarted","Data":"0dadf27e64cc64f8c046663c1ca2d44a658dcc8e4671b691f81c9bc864dcec3e"} Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.866729 5049 generic.go:334] "Generic (PLEG): container finished" podID="c3319e0c-70bb-4932-8408-e633569624c8" containerID="b5a2e5ba0b5a07d3bef1d7dfe0e0e0cf1eec4aad628e56a3f3a24bd46f764c1d" exitCode=0 Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.866764 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t" event={"ID":"c3319e0c-70bb-4932-8408-e633569624c8","Type":"ContainerDied","Data":"b5a2e5ba0b5a07d3bef1d7dfe0e0e0cf1eec4aad628e56a3f3a24bd46f764c1d"} Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.869698 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbb24b4b-dfbd-431f-8244-098c40f7c24f","Type":"ContainerStarted","Data":"e82514d0c463243a362e8f448012e954befa69bf19834cae92acea6cf9239bc7"} Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.871332 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pv2qx" event={"ID":"389cf061-3e03-4e54-bf97-c88a747fd18b","Type":"ContainerStarted","Data":"bc9c1b18296b33c6aedf49a84f5e80627a49a399d8d320932db333159b09c46b"} Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.872285 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"62ffcfe9-3e93-48ee-8d03-9b653d1bfede","Type":"ContainerStarted","Data":"e0bb3f2dbaf364487d744f22f95a1db0b0f24769e9cdbed2ab3cc9c64857b3f3"} Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.875607 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4962a09-aea5-455e-8620-b83f9ede60e5","Type":"ContainerStarted","Data":"a4661c69b9373cd4557cfdd633e7ab04376e716ecc51ef8d2dbbf8813e7e5a55"} Jan 27 17:16:03 crc kubenswrapper[5049]: I0127 17:16:03.893976 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.031446 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 17:16:04 crc kubenswrapper[5049]: E0127 17:16:04.275203 5049 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 27 17:16:04 crc kubenswrapper[5049]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/f377b5aa-3901-48ee-a680-81d820af1d56/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 17:16:04 crc kubenswrapper[5049]: > podSandboxID="4783fb201ae7114a23a0258ffd85aacd7be9b7646004d0f302680596e156fd56" Jan 27 17:16:04 crc kubenswrapper[5049]: E0127 17:16:04.275719 5049 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 17:16:04 crc kubenswrapper[5049]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxrdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-5ccc8479f9-4z9fb_openstack(f377b5aa-3901-48ee-a680-81d820af1d56): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/f377b5aa-3901-48ee-a680-81d820af1d56/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 17:16:04 crc kubenswrapper[5049]: > logger="UnhandledError" Jan 27 17:16:04 crc kubenswrapper[5049]: E0127 17:16:04.278401 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/f377b5aa-3901-48ee-a680-81d820af1d56/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb" podUID="f377b5aa-3901-48ee-a680-81d820af1d56" Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.320551 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q" Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.402925 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38791ef9-a379-4760-9320-403bf57b0199-dns-svc\") pod \"38791ef9-a379-4760-9320-403bf57b0199\" (UID: \"38791ef9-a379-4760-9320-403bf57b0199\") " Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.403055 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smfwj\" (UniqueName: \"kubernetes.io/projected/38791ef9-a379-4760-9320-403bf57b0199-kube-api-access-smfwj\") pod \"38791ef9-a379-4760-9320-403bf57b0199\" (UID: \"38791ef9-a379-4760-9320-403bf57b0199\") " Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.403117 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38791ef9-a379-4760-9320-403bf57b0199-config\") pod \"38791ef9-a379-4760-9320-403bf57b0199\" (UID: \"38791ef9-a379-4760-9320-403bf57b0199\") " Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.403665 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38791ef9-a379-4760-9320-403bf57b0199-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38791ef9-a379-4760-9320-403bf57b0199" (UID: "38791ef9-a379-4760-9320-403bf57b0199"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.403922 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38791ef9-a379-4760-9320-403bf57b0199-config" (OuterVolumeSpecName: "config") pod "38791ef9-a379-4760-9320-403bf57b0199" (UID: "38791ef9-a379-4760-9320-403bf57b0199"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.418981 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38791ef9-a379-4760-9320-403bf57b0199-kube-api-access-smfwj" (OuterVolumeSpecName: "kube-api-access-smfwj") pod "38791ef9-a379-4760-9320-403bf57b0199" (UID: "38791ef9-a379-4760-9320-403bf57b0199"). InnerVolumeSpecName "kube-api-access-smfwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.505366 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smfwj\" (UniqueName: \"kubernetes.io/projected/38791ef9-a379-4760-9320-403bf57b0199-kube-api-access-smfwj\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.505478 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38791ef9-a379-4760-9320-403bf57b0199-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.505491 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38791ef9-a379-4760-9320-403bf57b0199-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.558997 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.817976 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7s8s5"] Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.883161 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"28327cb6-87a9-4b24-b8fb-f43c33076b1b","Type":"ContainerStarted","Data":"39f95b3a35ca5a16bb2f41054b4b2fd9b049f49c3702b34e41b0d26dd9cb8170"} Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.885222 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"95574d5f-6872-4ff3-a7a4-44a960bb46f0","Type":"ContainerStarted","Data":"e6ad8c1f1f2979229b70f694b11fb76b6455e02a363c81fb2d6c41a8797289fb"} Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.886578 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a9fff683-8d1a-4a8c-b45f-8846c09a6f51","Type":"ContainerStarted","Data":"f05704ca09d5244d5d2ba51448fdc91ee5390f128c05aee4a283d2cdda182bce"} Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.889433 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t" event={"ID":"c3319e0c-70bb-4932-8408-e633569624c8","Type":"ContainerStarted","Data":"d1fad14bb015e86fa3544160d4e6094edc0404b44feedf7b17ae0019ff753b42"} Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.889546 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t" Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.892109 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q" Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.892117 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-b5c6q" event={"ID":"38791ef9-a379-4760-9320-403bf57b0199","Type":"ContainerDied","Data":"119a7b654e128e42ad0b1e6f2b10be091c0eddf6916a7b6ff42621870e25dc53"} Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.893883 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d","Type":"ContainerStarted","Data":"6728bd8422561c26b26822a9ec1e7908e7bf65ce97cbda63ec857f3e88033fd1"} Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.939411 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t" podStartSLOduration=3.212449925 podStartE2EDuration="14.939396948s" podCreationTimestamp="2026-01-27 17:15:50 +0000 UTC" firstStartedPulling="2026-01-27 17:15:51.367389714 +0000 UTC m=+1126.466363273" lastFinishedPulling="2026-01-27 17:16:03.094336747 +0000 UTC m=+1138.193310296" observedRunningTime="2026-01-27 17:16:04.905009996 +0000 UTC m=+1140.003983545" watchObservedRunningTime="2026-01-27 17:16:04.939396948 +0000 UTC m=+1140.038370497" Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.989828 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-b5c6q"] Jan 27 17:16:04 crc kubenswrapper[5049]: I0127 17:16:04.991490 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-b5c6q"] Jan 27 17:16:05 crc kubenswrapper[5049]: I0127 17:16:05.656275 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38791ef9-a379-4760-9320-403bf57b0199" path="/var/lib/kubelet/pods/38791ef9-a379-4760-9320-403bf57b0199/volumes" Jan 27 17:16:05 crc kubenswrapper[5049]: I0127 17:16:05.900530 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7s8s5" event={"ID":"009eaa47-1d7c-46e6-aeea-b25f77ea35a9","Type":"ContainerStarted","Data":"bcc01a0403691fbcf568a97f26783937c01c259b6c035aa9b7379ca70667d7f0"} Jan 27 17:16:10 crc kubenswrapper[5049]: I0127 17:16:10.888812 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t" Jan 27 17:16:10 crc kubenswrapper[5049]: I0127 17:16:10.972213 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-4z9fb"] Jan 27 17:16:11 crc kubenswrapper[5049]: I0127 17:16:11.960467 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"28327cb6-87a9-4b24-b8fb-f43c33076b1b","Type":"ContainerStarted","Data":"1aad04186f3f290c52f9e3c6f44246f78807c70f72c854c4cfa401d9f8b67ba3"} Jan 27 17:16:11 crc kubenswrapper[5049]: I0127 17:16:11.961310 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 17:16:11 crc kubenswrapper[5049]: I0127 17:16:11.965435 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb" event={"ID":"f377b5aa-3901-48ee-a680-81d820af1d56","Type":"ContainerStarted","Data":"3fde465a2e515ce999dfd68efa3ee3718579ffaad9844624a0a4826d8ed94225"} Jan 27 17:16:11 crc kubenswrapper[5049]: I0127 17:16:11.965584 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb" Jan 27 17:16:11 crc kubenswrapper[5049]: I0127 
17:16:11.965637 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb" podUID="f377b5aa-3901-48ee-a680-81d820af1d56" containerName="dnsmasq-dns" containerID="cri-o://3fde465a2e515ce999dfd68efa3ee3718579ffaad9844624a0a4826d8ed94225" gracePeriod=10 Jan 27 17:16:11 crc kubenswrapper[5049]: I0127 17:16:11.994290 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb" podStartSLOduration=9.999374663 podStartE2EDuration="21.994269154s" podCreationTimestamp="2026-01-27 17:15:50 +0000 UTC" firstStartedPulling="2026-01-27 17:15:51.102231337 +0000 UTC m=+1126.201204886" lastFinishedPulling="2026-01-27 17:16:03.097125828 +0000 UTC m=+1138.196099377" observedRunningTime="2026-01-27 17:16:11.992933015 +0000 UTC m=+1147.091906564" watchObservedRunningTime="2026-01-27 17:16:11.994269154 +0000 UTC m=+1147.093242703" Jan 27 17:16:11 crc kubenswrapper[5049]: I0127 17:16:11.997933 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=11.016787129 podStartE2EDuration="17.997921309s" podCreationTimestamp="2026-01-27 17:15:54 +0000 UTC" firstStartedPulling="2026-01-27 17:16:03.861531633 +0000 UTC m=+1138.960505172" lastFinishedPulling="2026-01-27 17:16:10.842665803 +0000 UTC m=+1145.941639352" observedRunningTime="2026-01-27 17:16:11.978090737 +0000 UTC m=+1147.077064276" watchObservedRunningTime="2026-01-27 17:16:11.997921309 +0000 UTC m=+1147.096894858" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.426221 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.566789 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxrdd\" (UniqueName: \"kubernetes.io/projected/f377b5aa-3901-48ee-a680-81d820af1d56-kube-api-access-wxrdd\") pod \"f377b5aa-3901-48ee-a680-81d820af1d56\" (UID: \"f377b5aa-3901-48ee-a680-81d820af1d56\") " Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.566948 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f377b5aa-3901-48ee-a680-81d820af1d56-dns-svc\") pod \"f377b5aa-3901-48ee-a680-81d820af1d56\" (UID: \"f377b5aa-3901-48ee-a680-81d820af1d56\") " Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.566982 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f377b5aa-3901-48ee-a680-81d820af1d56-config\") pod \"f377b5aa-3901-48ee-a680-81d820af1d56\" (UID: \"f377b5aa-3901-48ee-a680-81d820af1d56\") " Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.573459 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f377b5aa-3901-48ee-a680-81d820af1d56-kube-api-access-wxrdd" (OuterVolumeSpecName: "kube-api-access-wxrdd") pod "f377b5aa-3901-48ee-a680-81d820af1d56" (UID: "f377b5aa-3901-48ee-a680-81d820af1d56"). InnerVolumeSpecName "kube-api-access-wxrdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.613761 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f377b5aa-3901-48ee-a680-81d820af1d56-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f377b5aa-3901-48ee-a680-81d820af1d56" (UID: "f377b5aa-3901-48ee-a680-81d820af1d56"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.624700 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f377b5aa-3901-48ee-a680-81d820af1d56-config" (OuterVolumeSpecName: "config") pod "f377b5aa-3901-48ee-a680-81d820af1d56" (UID: "f377b5aa-3901-48ee-a680-81d820af1d56"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.669661 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f377b5aa-3901-48ee-a680-81d820af1d56-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.670028 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f377b5aa-3901-48ee-a680-81d820af1d56-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.670149 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxrdd\" (UniqueName: \"kubernetes.io/projected/f377b5aa-3901-48ee-a680-81d820af1d56-kube-api-access-wxrdd\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.940861 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-m6m76"] Jan 27 17:16:12 crc kubenswrapper[5049]: E0127 17:16:12.941524 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f377b5aa-3901-48ee-a680-81d820af1d56" containerName="dnsmasq-dns" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.941551 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f377b5aa-3901-48ee-a680-81d820af1d56" containerName="dnsmasq-dns" Jan 27 17:16:12 crc kubenswrapper[5049]: E0127 17:16:12.941587 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f377b5aa-3901-48ee-a680-81d820af1d56" containerName="init" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.941595 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f377b5aa-3901-48ee-a680-81d820af1d56" containerName="init" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.943352 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f377b5aa-3901-48ee-a680-81d820af1d56" containerName="dnsmasq-dns" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.945614 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.950347 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.968368 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-m6m76"] Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.977616 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g6x5\" (UniqueName: \"kubernetes.io/projected/df9b6856-04b3-4630-b200-d99636bdb2fb-kube-api-access-5g6x5\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.977691 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9b6856-04b3-4630-b200-d99636bdb2fb-combined-ca-bundle\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.977736 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/df9b6856-04b3-4630-b200-d99636bdb2fb-ovs-rundir\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.977764 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9b6856-04b3-4630-b200-d99636bdb2fb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.977854 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/df9b6856-04b3-4630-b200-d99636bdb2fb-ovn-rundir\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:12 crc kubenswrapper[5049]: I0127 17:16:12.977881 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df9b6856-04b3-4630-b200-d99636bdb2fb-config\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.024156 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pv2qx" event={"ID":"389cf061-3e03-4e54-bf97-c88a747fd18b","Type":"ContainerStarted","Data":"b6b4e3c2a3c34184115ffa3dff467c9f9a271bdbcaaf1bf9c691d523be4cc482"} Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.024234 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.033247 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d","Type":"ContainerStarted","Data":"626c86acb733344d07e343f4289761a9f30520eda1c48c93eebace6d3cdd0601"} Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.052954 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-pv2qx" podStartSLOduration=6.262149474 podStartE2EDuration="14.052935694s" podCreationTimestamp="2026-01-27 17:15:59 +0000 UTC" firstStartedPulling="2026-01-27 17:16:03.704528025 +0000 UTC m=+1138.803501574" lastFinishedPulling="2026-01-27 17:16:11.495314235 +0000 UTC m=+1146.594287794" observedRunningTime="2026-01-27 17:16:13.052223414 +0000 UTC m=+1148.151196973" watchObservedRunningTime="2026-01-27 17:16:13.052935694 +0000 UTC m=+1148.151909243" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.055063 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4962a09-aea5-455e-8620-b83f9ede60e5","Type":"ContainerStarted","Data":"83f6029dc42a366c9752e1aa6f03886c6ce220b7fe1e10f9085bca7560faa674"} Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.055614 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.060779 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a9fff683-8d1a-4a8c-b45f-8846c09a6f51","Type":"ContainerStarted","Data":"c3146a5bf097d32d40daa89b3523de0e8cfc28cb7a623381d6ae35bbb1c89d79"} Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.061984 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"de39a65a-7265-4418-a94b-f8f8f30c3807","Type":"ContainerStarted","Data":"f7c4e00da4c82ddc9caf7e857d44fe671979a2b534b5f360c2075b65d6c610d1"} Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.066065 5049 generic.go:334] "Generic (PLEG): container finished" podID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerID="5e17a6ba9e15c2c27dc5039bb862db1d335a29b0f52813e7652909d081479ad1" exitCode=0 Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.066109 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7s8s5" event={"ID":"009eaa47-1d7c-46e6-aeea-b25f77ea35a9","Type":"ContainerDied","Data":"5e17a6ba9e15c2c27dc5039bb862db1d335a29b0f52813e7652909d081479ad1"} Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.079659 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g6x5\" (UniqueName: \"kubernetes.io/projected/df9b6856-04b3-4630-b200-d99636bdb2fb-kube-api-access-5g6x5\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.079717 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9b6856-04b3-4630-b200-d99636bdb2fb-combined-ca-bundle\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.079750 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/df9b6856-04b3-4630-b200-d99636bdb2fb-ovs-rundir\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " 
pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.079769 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9b6856-04b3-4630-b200-d99636bdb2fb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.079840 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/df9b6856-04b3-4630-b200-d99636bdb2fb-ovn-rundir\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.079859 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df9b6856-04b3-4630-b200-d99636bdb2fb-config\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.080587 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df9b6856-04b3-4630-b200-d99636bdb2fb-config\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.081129 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/df9b6856-04b3-4630-b200-d99636bdb2fb-ovn-rundir\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.081364 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/df9b6856-04b3-4630-b200-d99636bdb2fb-ovs-rundir\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.098957 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hcdx2"] Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.100178 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.103206 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.114056 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9b6856-04b3-4630-b200-d99636bdb2fb-combined-ca-bundle\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.114734 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9b6856-04b3-4630-b200-d99636bdb2fb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.115579 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.127375998 podStartE2EDuration="17.11555775s" podCreationTimestamp="2026-01-27 17:15:56 +0000 UTC" firstStartedPulling="2026-01-27 17:16:03.701380174 +0000 UTC m=+1138.800353713" lastFinishedPulling="2026-01-27 17:16:11.689561906 +0000 UTC m=+1146.788535465" observedRunningTime="2026-01-27 17:16:13.080313504 +0000 UTC m=+1148.179287053" watchObservedRunningTime="2026-01-27 17:16:13.11555775 +0000 UTC m=+1148.214531299" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.116782 5049 generic.go:334] "Generic (PLEG): container finished" podID="f377b5aa-3901-48ee-a680-81d820af1d56" containerID="3fde465a2e515ce999dfd68efa3ee3718579ffaad9844624a0a4826d8ed94225" exitCode=0 Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.116918 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.117326 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb" event={"ID":"f377b5aa-3901-48ee-a680-81d820af1d56","Type":"ContainerDied","Data":"3fde465a2e515ce999dfd68efa3ee3718579ffaad9844624a0a4826d8ed94225"} Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.117388 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-4z9fb" event={"ID":"f377b5aa-3901-48ee-a680-81d820af1d56","Type":"ContainerDied","Data":"4783fb201ae7114a23a0258ffd85aacd7be9b7646004d0f302680596e156fd56"} Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.117430 5049 scope.go:117] "RemoveContainer" containerID="3fde465a2e515ce999dfd68efa3ee3718579ffaad9844624a0a4826d8ed94225" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.118954 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g6x5\" (UniqueName: \"kubernetes.io/projected/df9b6856-04b3-4630-b200-d99636bdb2fb-kube-api-access-5g6x5\") pod \"ovn-controller-metrics-m6m76\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.130215 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"95574d5f-6872-4ff3-a7a4-44a960bb46f0","Type":"ContainerStarted","Data":"7c0aed498f47898c511db9b6a1e7b505874797ec15911d29135932092dbc34ca"} Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.135441 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hcdx2"] Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.230811 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hcdx2"] Jan 27 17:16:13 crc kubenswrapper[5049]: E0127 17:16:13.250117 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-qmpqv ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2" podUID="3d96cf6b-2335-4e03-857c-a87907785259" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.265077 5049 scope.go:117] "RemoveContainer" containerID="e508260117911efe6a41c619657fb398cddb38b6c461d8df6c3fe25ea777a1e5" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.266268 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.331359 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-config\") pod \"dnsmasq-dns-7fd796d7df-hcdx2\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") " pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.331527 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hcdx2\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") " pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.331720 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmpqv\" (UniqueName: \"kubernetes.io/projected/3d96cf6b-2335-4e03-857c-a87907785259-kube-api-access-qmpqv\") pod \"dnsmasq-dns-7fd796d7df-hcdx2\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") " pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.331784 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hcdx2\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") " pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.348839 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7bjbp"] Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.365798 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.366586 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7bjbp"] Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.367819 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.400556 5049 scope.go:117] "RemoveContainer" containerID="3fde465a2e515ce999dfd68efa3ee3718579ffaad9844624a0a4826d8ed94225" Jan 27 17:16:13 crc kubenswrapper[5049]: E0127 17:16:13.400905 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fde465a2e515ce999dfd68efa3ee3718579ffaad9844624a0a4826d8ed94225\": container with ID starting with 3fde465a2e515ce999dfd68efa3ee3718579ffaad9844624a0a4826d8ed94225 not found: ID does not exist" containerID="3fde465a2e515ce999dfd68efa3ee3718579ffaad9844624a0a4826d8ed94225" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.400943 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fde465a2e515ce999dfd68efa3ee3718579ffaad9844624a0a4826d8ed94225"} err="failed to get container status \"3fde465a2e515ce999dfd68efa3ee3718579ffaad9844624a0a4826d8ed94225\": rpc error: code = NotFound desc = could not find container \"3fde465a2e515ce999dfd68efa3ee3718579ffaad9844624a0a4826d8ed94225\": container with ID starting with 3fde465a2e515ce999dfd68efa3ee3718579ffaad9844624a0a4826d8ed94225 not found: ID does not exist" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.400965 5049 scope.go:117] "RemoveContainer" containerID="e508260117911efe6a41c619657fb398cddb38b6c461d8df6c3fe25ea777a1e5" Jan 27 17:16:13 crc kubenswrapper[5049]: E0127 17:16:13.408111 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e508260117911efe6a41c619657fb398cddb38b6c461d8df6c3fe25ea777a1e5\": container with ID starting with e508260117911efe6a41c619657fb398cddb38b6c461d8df6c3fe25ea777a1e5 not found: ID does not exist" containerID="e508260117911efe6a41c619657fb398cddb38b6c461d8df6c3fe25ea777a1e5" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.408152 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e508260117911efe6a41c619657fb398cddb38b6c461d8df6c3fe25ea777a1e5"} err="failed to get container status \"e508260117911efe6a41c619657fb398cddb38b6c461d8df6c3fe25ea777a1e5\": rpc error: code = NotFound desc = could not find container \"e508260117911efe6a41c619657fb398cddb38b6c461d8df6c3fe25ea777a1e5\": container with ID starting with e508260117911efe6a41c619657fb398cddb38b6c461d8df6c3fe25ea777a1e5 not found: ID does not exist" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.431545 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-4z9fb"] Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.435163 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmpqv\" (UniqueName: \"kubernetes.io/projected/3d96cf6b-2335-4e03-857c-a87907785259-kube-api-access-qmpqv\") pod \"dnsmasq-dns-7fd796d7df-hcdx2\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") " pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.435221 5049 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hcdx2\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") " pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.435286 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-config\") pod \"dnsmasq-dns-7fd796d7df-hcdx2\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") " pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.435316 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hcdx2\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") " pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.436111 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hcdx2\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") " pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.436957 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hcdx2\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") " pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.437621 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-config\") pod \"dnsmasq-dns-7fd796d7df-hcdx2\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") " pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.439781 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-4z9fb"] Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.454286 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmpqv\" (UniqueName: \"kubernetes.io/projected/3d96cf6b-2335-4e03-857c-a87907785259-kube-api-access-qmpqv\") pod \"dnsmasq-dns-7fd796d7df-hcdx2\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") " pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.537361 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.537561 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-config\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp" Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 
17:16:13.537665 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhkk4\" (UniqueName: \"kubernetes.io/projected/c2e801e9-4180-412c-83c2-c2871b506588-kube-api-access-dhkk4\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.537745 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.537792 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.640264 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-config\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.641526 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-config\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.641590 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhkk4\" (UniqueName: \"kubernetes.io/projected/c2e801e9-4180-412c-83c2-c2871b506588-kube-api-access-dhkk4\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.641701 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.643118 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.643873 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.643890 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.644055 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.645026 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.671521 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhkk4\" (UniqueName: \"kubernetes.io/projected/c2e801e9-4180-412c-83c2-c2871b506588-kube-api-access-dhkk4\") pod \"dnsmasq-dns-86db49b7ff-7bjbp\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.672334 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f377b5aa-3901-48ee-a680-81d820af1d56" path="/var/lib/kubelet/pods/f377b5aa-3901-48ee-a680-81d820af1d56/volumes"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.726234 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:13 crc kubenswrapper[5049]: I0127 17:16:13.909489 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-m6m76"]
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.142927 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"62ffcfe9-3e93-48ee-8d03-9b653d1bfede","Type":"ContainerStarted","Data":"a2cc96849a7585da55c3fcd0c2a3f9b893b62b13c4ac2b87e3206b04bb909283"}
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.147784 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-m6m76" event={"ID":"df9b6856-04b3-4630-b200-d99636bdb2fb","Type":"ContainerStarted","Data":"35fcd5ff80273dfdb97e6e8dcc190bfe7218ae99bd53a4e261f8557845a4005e"}
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.149413 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbb24b4b-dfbd-431f-8244-098c40f7c24f","Type":"ContainerStarted","Data":"a29986ca75cb1699fa9d7fe36bd5312307aec498664cb9341a2e3a9d0ea59e2b"}
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.153015 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2"
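The block above is the kubelet's volume life cycle for the dnsmasq-dns-86db49b7ff-7bjbp pod in three phases: VerifyControllerAttachedVolume, then "MountVolume started" (reconciler_common.go:218), then "MountVolume.SetUp succeeded" (operation_generator.go:637). As an illustration only — this is a log-analysis sketch, not kubelet code — the Go program below pairs started/succeeded lines by their UniqueName and reports per-volume mount latency; the regexes are assumptions fitted to the escaped quoting seen in these lines.

```go
// mountpair.go — hypothetical helper: reads journal lines on stdin and
// pairs "MountVolume started" with "MountVolume.SetUp succeeded" events.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

var (
	// klog header time, e.g. "I0127 17:16:13.641590".
	tsRe = regexp.MustCompile(`[IWE]\d{4} (\d{2}:\d{2}:\d{2}\.\d{6})`)
	// Volume names appear as (UniqueName: \"kubernetes.io/...\") in these logs.
	volRe = regexp.MustCompile(`UniqueName: \\"([^\\]+)\\"`)
)

func main() {
	started := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		ts := tsRe.FindStringSubmatch(line)
		vol := volRe.FindStringSubmatch(line)
		if ts == nil || vol == nil {
			continue
		}
		t, err := time.Parse("15:04:05.000000", ts[1])
		if err != nil {
			continue
		}
		switch {
		case strings.Contains(line, "operationExecutor.MountVolume started"):
			started[vol[1]] = t
		case strings.Contains(line, "MountVolume.SetUp succeeded"):
			if s, ok := started[vol[1]]; ok {
				fmt.Printf("%s  %v\n", vol[1], t.Sub(s))
			}
		}
	}
}
```

Fed the lines above, it would show the configmap volumes mounting in a millisecond or two (config: 17:16:13.640264 to 17:16:13.641526) while the projected kube-api-access-dhkk4 token takes about 30ms (17:16:13.641590 to 17:16:13.671521) — projected service-account tokens involve an API round trip, so the gap is expected.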
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.153020 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7s8s5" event={"ID":"009eaa47-1d7c-46e6-aeea-b25f77ea35a9","Type":"ContainerStarted","Data":"c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4"}
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.153077 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7s8s5" event={"ID":"009eaa47-1d7c-46e6-aeea-b25f77ea35a9","Type":"ContainerStarted","Data":"286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5"}
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.161085 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2"
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.199189 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7s8s5" podStartSLOduration=8.852457558 podStartE2EDuration="15.199167571s" podCreationTimestamp="2026-01-27 17:15:59 +0000 UTC" firstStartedPulling="2026-01-27 17:16:05.148461427 +0000 UTC m=+1140.247434976" lastFinishedPulling="2026-01-27 17:16:11.49517143 +0000 UTC m=+1146.594144989" observedRunningTime="2026-01-27 17:16:14.191097718 +0000 UTC m=+1149.290071267" watchObservedRunningTime="2026-01-27 17:16:14.199167571 +0000 UTC m=+1149.298141120"
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.217588 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7bjbp"]
Jan 27 17:16:14 crc kubenswrapper[5049]: W0127 17:16:14.227772 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2e801e9_4180_412c_83c2_c2871b506588.slice/crio-4609e644de365c65f7f3d8901ead0893e53e63d7af2e27fac16c691e783236e4 WatchSource:0}: Error finding container 4609e644de365c65f7f3d8901ead0893e53e63d7af2e27fac16c691e783236e4: Status 404 returned error can't find the container with id 4609e644de365c65f7f3d8901ead0893e53e63d7af2e27fac16c691e783236e4
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.252990 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-config\") pod \"3d96cf6b-2335-4e03-857c-a87907785259\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") "
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.253276 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-ovsdbserver-nb\") pod \"3d96cf6b-2335-4e03-857c-a87907785259\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") "
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.253320 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmpqv\" (UniqueName: \"kubernetes.io/projected/3d96cf6b-2335-4e03-857c-a87907785259-kube-api-access-qmpqv\") pod \"3d96cf6b-2335-4e03-857c-a87907785259\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") "
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.253430 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-dns-svc\") pod \"3d96cf6b-2335-4e03-857c-a87907785259\" (UID: \"3d96cf6b-2335-4e03-857c-a87907785259\") "
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.253713 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-config" (OuterVolumeSpecName: "config") pod "3d96cf6b-2335-4e03-857c-a87907785259" (UID: "3d96cf6b-2335-4e03-857c-a87907785259"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.253970 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3d96cf6b-2335-4e03-857c-a87907785259" (UID: "3d96cf6b-2335-4e03-857c-a87907785259"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.254065 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d96cf6b-2335-4e03-857c-a87907785259" (UID: "3d96cf6b-2335-4e03-857c-a87907785259"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.256883 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.256904 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.256915 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d96cf6b-2335-4e03-857c-a87907785259-config\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.258709 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d96cf6b-2335-4e03-857c-a87907785259-kube-api-access-qmpqv" (OuterVolumeSpecName: "kube-api-access-qmpqv") pod "3d96cf6b-2335-4e03-857c-a87907785259" (UID: "3d96cf6b-2335-4e03-857c-a87907785259"). InnerVolumeSpecName "kube-api-access-qmpqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:16:14 crc kubenswrapper[5049]: I0127 17:16:14.358118 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmpqv\" (UniqueName: \"kubernetes.io/projected/3d96cf6b-2335-4e03-857c-a87907785259-kube-api-access-qmpqv\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:15 crc kubenswrapper[5049]: I0127 17:16:15.134385 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7s8s5"
Jan 27 17:16:15 crc kubenswrapper[5049]: I0127 17:16:15.134737 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7s8s5"
Jan 27 17:16:15 crc kubenswrapper[5049]: I0127 17:16:15.165940 5049 generic.go:334] "Generic (PLEG): container finished" podID="c2e801e9-4180-412c-83c2-c2871b506588" containerID="585d2a95a97ce61a2a63e2251f456e50e86e38b810875550a8339e8c847ba36a" exitCode=0
Jan 27 17:16:15 crc kubenswrapper[5049]: I0127 17:16:15.165989 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp" event={"ID":"c2e801e9-4180-412c-83c2-c2871b506588","Type":"ContainerDied","Data":"585d2a95a97ce61a2a63e2251f456e50e86e38b810875550a8339e8c847ba36a"}
Jan 27 17:16:15 crc kubenswrapper[5049]: I0127 17:16:15.166079 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp" event={"ID":"c2e801e9-4180-412c-83c2-c2871b506588","Type":"ContainerStarted","Data":"4609e644de365c65f7f3d8901ead0893e53e63d7af2e27fac16c691e783236e4"}
Jan 27 17:16:15 crc kubenswrapper[5049]: I0127 17:16:15.166235 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hcdx2"
Jan 27 17:16:15 crc kubenswrapper[5049]: I0127 17:16:15.223956 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hcdx2"]
Jan 27 17:16:15 crc kubenswrapper[5049]: I0127 17:16:15.233296 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hcdx2"]
Jan 27 17:16:15 crc kubenswrapper[5049]: I0127 17:16:15.656306 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d96cf6b-2335-4e03-857c-a87907785259" path="/var/lib/kubelet/pods/3d96cf6b-2335-4e03-857c-a87907785259/volumes"
Jan 27 17:16:16 crc kubenswrapper[5049]: I0127 17:16:16.185145 5049 generic.go:334] "Generic (PLEG): container finished" podID="95574d5f-6872-4ff3-a7a4-44a960bb46f0" containerID="7c0aed498f47898c511db9b6a1e7b505874797ec15911d29135932092dbc34ca" exitCode=0
Jan 27 17:16:16 crc kubenswrapper[5049]: I0127 17:16:16.185266 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"95574d5f-6872-4ff3-a7a4-44a960bb46f0","Type":"ContainerDied","Data":"7c0aed498f47898c511db9b6a1e7b505874797ec15911d29135932092dbc34ca"}
Jan 27 17:16:16 crc kubenswrapper[5049]: I0127 17:16:16.189796 5049 generic.go:334] "Generic (PLEG): container finished" podID="de39a65a-7265-4418-a94b-f8f8f30c3807" containerID="f7c4e00da4c82ddc9caf7e857d44fe671979a2b534b5f360c2075b65d6c610d1" exitCode=0
Jan 27 17:16:16 crc kubenswrapper[5049]: I0127 17:16:16.190222 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"de39a65a-7265-4418-a94b-f8f8f30c3807","Type":"ContainerDied","Data":"f7c4e00da4c82ddc9caf7e857d44fe671979a2b534b5f360c2075b65d6c610d1"}
Jan 27 17:16:17 crc kubenswrapper[5049]: I0127 17:16:17.200809 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a9fff683-8d1a-4a8c-b45f-8846c09a6f51","Type":"ContainerStarted","Data":"5662e99c2eaeb51406aed793385fad5230b1d5921534ab4909d10f8d999bf0f2"}
event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a9fff683-8d1a-4a8c-b45f-8846c09a6f51","Type":"ContainerStarted","Data":"5662e99c2eaeb51406aed793385fad5230b1d5921534ab4909d10f8d999bf0f2"} Jan 27 17:16:17 crc kubenswrapper[5049]: I0127 17:16:17.202785 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-m6m76" event={"ID":"df9b6856-04b3-4630-b200-d99636bdb2fb","Type":"ContainerStarted","Data":"17f19c76a2ac6d447b4808202544c8e5fab56d8363f7e4b4d465252ee3ed9eb6"} Jan 27 17:16:17 crc kubenswrapper[5049]: I0127 17:16:17.205578 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"de39a65a-7265-4418-a94b-f8f8f30c3807","Type":"ContainerStarted","Data":"4c24478588bff8e90f1e2d67898dd6f439331bc9e8f594f828123fbc7e460d13"} Jan 27 17:16:17 crc kubenswrapper[5049]: I0127 17:16:17.210101 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d","Type":"ContainerStarted","Data":"21f5e0dc07fb3d38bb7e19a51a4c0dbf807f4111a67586e4958d5638d23ad1b4"} Jan 27 17:16:17 crc kubenswrapper[5049]: I0127 17:16:17.212919 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp" event={"ID":"c2e801e9-4180-412c-83c2-c2871b506588","Type":"ContainerStarted","Data":"a754ffe3df0b9112bf0fad75e1f26b0f4fe0656c2dfbea5dc67947d9a5bac52e"} Jan 27 17:16:17 crc kubenswrapper[5049]: I0127 17:16:17.213459 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp" Jan 27 17:16:17 crc kubenswrapper[5049]: I0127 17:16:17.215502 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"95574d5f-6872-4ff3-a7a4-44a960bb46f0","Type":"ContainerStarted","Data":"2396b9674bf7c0eb9526c0c351d8d2c08f432f905d450d6c35283d1d84ab9751"} Jan 27 17:16:17 crc kubenswrapper[5049]: I0127 17:16:17.240757 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.454895013 podStartE2EDuration="15.240738207s" podCreationTimestamp="2026-01-27 17:16:02 +0000 UTC" firstStartedPulling="2026-01-27 17:16:04.572762254 +0000 UTC m=+1139.671735793" lastFinishedPulling="2026-01-27 17:16:16.358605438 +0000 UTC m=+1151.457578987" observedRunningTime="2026-01-27 17:16:17.22973682 +0000 UTC m=+1152.328710409" watchObservedRunningTime="2026-01-27 17:16:17.240738207 +0000 UTC m=+1152.339711766" Jan 27 17:16:17 crc kubenswrapper[5049]: I0127 17:16:17.285291 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp" podStartSLOduration=4.285273191 podStartE2EDuration="4.285273191s" podCreationTimestamp="2026-01-27 17:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:16:17.257577823 +0000 UTC m=+1152.356551412" watchObservedRunningTime="2026-01-27 17:16:17.285273191 +0000 UTC m=+1152.384246740" Jan 27 17:16:17 crc kubenswrapper[5049]: I0127 17:16:17.290822 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=18.902573611 podStartE2EDuration="26.290804831s" podCreationTimestamp="2026-01-27 17:15:51 +0000 UTC" firstStartedPulling="2026-01-27 17:16:03.728907518 +0000 UTC m=+1138.827881077" lastFinishedPulling="2026-01-27 17:16:11.117138748 +0000 UTC 
m=+1146.216112297" observedRunningTime="2026-01-27 17:16:17.283983254 +0000 UTC m=+1152.382956803" watchObservedRunningTime="2026-01-27 17:16:17.290804831 +0000 UTC m=+1152.389778370" Jan 27 17:16:17 crc kubenswrapper[5049]: I0127 17:16:17.322741 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=17.122318048 podStartE2EDuration="24.322716681s" podCreationTimestamp="2026-01-27 17:15:53 +0000 UTC" firstStartedPulling="2026-01-27 17:16:03.861721048 +0000 UTC m=+1138.960694597" lastFinishedPulling="2026-01-27 17:16:11.062119681 +0000 UTC m=+1146.161093230" observedRunningTime="2026-01-27 17:16:17.317965794 +0000 UTC m=+1152.416939383" watchObservedRunningTime="2026-01-27 17:16:17.322716681 +0000 UTC m=+1152.421690260" Jan 27 17:16:17 crc kubenswrapper[5049]: I0127 17:16:17.350130 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=5.987238318 podStartE2EDuration="18.350086181s" podCreationTimestamp="2026-01-27 17:15:59 +0000 UTC" firstStartedPulling="2026-01-27 17:16:04.066857455 +0000 UTC m=+1139.165831004" lastFinishedPulling="2026-01-27 17:16:16.429705328 +0000 UTC m=+1151.528678867" observedRunningTime="2026-01-27 17:16:17.344793718 +0000 UTC m=+1152.443767267" watchObservedRunningTime="2026-01-27 17:16:17.350086181 +0000 UTC m=+1152.449059730" Jan 27 17:16:17 crc kubenswrapper[5049]: I0127 17:16:17.377550 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-m6m76" podStartSLOduration=2.900470195 podStartE2EDuration="5.377527752s" podCreationTimestamp="2026-01-27 17:16:12 +0000 UTC" firstStartedPulling="2026-01-27 17:16:13.929606147 +0000 UTC m=+1149.028579696" lastFinishedPulling="2026-01-27 17:16:16.406663704 +0000 UTC m=+1151.505637253" observedRunningTime="2026-01-27 17:16:17.371375725 +0000 UTC m=+1152.470349294" watchObservedRunningTime="2026-01-27 17:16:17.377527752 +0000 UTC m=+1152.476501311" Jan 27 17:16:18 crc kubenswrapper[5049]: I0127 17:16:18.894484 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:18 crc kubenswrapper[5049]: I0127 17:16:18.894878 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:18 crc kubenswrapper[5049]: I0127 17:16:18.958608 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.007432 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.063142 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.233908 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.284739 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.294901 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.636753 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.646024 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-thjcd"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.646568 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.646727 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.646871 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.663754 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.671777 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/051db122-80f6-47fc-8d5c-5244d92e593d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.671817 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.671841 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.671891 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/051db122-80f6-47fc-8d5c-5244d92e593d-scripts\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.671919 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.671949 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051db122-80f6-47fc-8d5c-5244d92e593d-config\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.671985 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6pgt\" (UniqueName: \"kubernetes.io/projected/051db122-80f6-47fc-8d5c-5244d92e593d-kube-api-access-g6pgt\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.777035 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/051db122-80f6-47fc-8d5c-5244d92e593d-scripts\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.777132 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.777211 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051db122-80f6-47fc-8d5c-5244d92e593d-config\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.777303 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6pgt\" (UniqueName: \"kubernetes.io/projected/051db122-80f6-47fc-8d5c-5244d92e593d-kube-api-access-g6pgt\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.777388 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/051db122-80f6-47fc-8d5c-5244d92e593d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.777421 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.777461 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.778852 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/051db122-80f6-47fc-8d5c-5244d92e593d-scripts\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.779307 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/051db122-80f6-47fc-8d5c-5244d92e593d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.780697 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051db122-80f6-47fc-8d5c-5244d92e593d-config\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.786920 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.788200 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.792379 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.801109 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6pgt\" (UniqueName: \"kubernetes.io/projected/051db122-80f6-47fc-8d5c-5244d92e593d-kube-api-access-g6pgt\") pod \"ovn-northd-0\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " pod="openstack/ovn-northd-0"
Jan 27 17:16:19 crc kubenswrapper[5049]: I0127 17:16:19.827872 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 27 17:16:20 crc kubenswrapper[5049]: I0127 17:16:20.017787 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 27 17:16:20 crc kubenswrapper[5049]: I0127 17:16:20.483542 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 27 17:16:20 crc kubenswrapper[5049]: W0127 17:16:20.486238 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod051db122_80f6_47fc_8d5c_5244d92e593d.slice/crio-8eebc4551fa1812d5581049820945aa29afebb166de480408ce90a328196b2f1 WatchSource:0}: Error finding container 8eebc4551fa1812d5581049820945aa29afebb166de480408ce90a328196b2f1: Status 404 returned error can't find the container with id 8eebc4551fa1812d5581049820945aa29afebb166de480408ce90a328196b2f1
Jan 27 17:16:21 crc kubenswrapper[5049]: I0127 17:16:21.246075 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"051db122-80f6-47fc-8d5c-5244d92e593d","Type":"ContainerStarted","Data":"8eebc4551fa1812d5581049820945aa29afebb166de480408ce90a328196b2f1"}
Jan 27 17:16:22 crc kubenswrapper[5049]: I0127 17:16:22.257179 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"051db122-80f6-47fc-8d5c-5244d92e593d","Type":"ContainerStarted","Data":"bc748ff2fbd71fb24f80f8b730d7367d5fd71e407cbaf62490be6b914c76b0a8"}
Jan 27 17:16:22 crc kubenswrapper[5049]: I0127 17:16:22.257588 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"051db122-80f6-47fc-8d5c-5244d92e593d","Type":"ContainerStarted","Data":"ffdb84acf31942996807c242b98114c9c8d67e2eeaa568117f878ad3675f41d8"}
Jan 27 17:16:22 crc kubenswrapper[5049]: I0127 17:16:22.257617 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 27 17:16:22 crc kubenswrapper[5049]: I0127 17:16:22.283520 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.302749542 podStartE2EDuration="3.283491736s" podCreationTimestamp="2026-01-27 17:16:19 +0000 UTC" firstStartedPulling="2026-01-27 17:16:20.493108062 +0000 UTC m=+1155.592081641" lastFinishedPulling="2026-01-27 17:16:21.473850286 +0000 UTC m=+1156.572823835" observedRunningTime="2026-01-27 17:16:22.281346824 +0000 UTC m=+1157.380320393" watchObservedRunningTime="2026-01-27 17:16:22.283491736 +0000 UTC m=+1157.382465285"
Jan 27 17:16:23 crc kubenswrapper[5049]: I0127 17:16:23.267109 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 27 17:16:23 crc kubenswrapper[5049]: I0127 17:16:23.267711 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 27 17:16:23 crc kubenswrapper[5049]: I0127 17:16:23.728848 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp"
Jan 27 17:16:23 crc kubenswrapper[5049]: I0127 17:16:23.801911 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-f9s5t"]
Jan 27 17:16:23 crc kubenswrapper[5049]: I0127 17:16:23.803094 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t" podUID="c3319e0c-70bb-4932-8408-e633569624c8" containerName="dnsmasq-dns" containerID="cri-o://d1fad14bb015e86fa3544160d4e6094edc0404b44feedf7b17ae0019ff753b42" gracePeriod=10
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.122118 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.273444 5049 generic.go:334] "Generic (PLEG): container finished" podID="c3319e0c-70bb-4932-8408-e633569624c8" containerID="d1fad14bb015e86fa3544160d4e6094edc0404b44feedf7b17ae0019ff753b42" exitCode=0
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.274507 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t" event={"ID":"c3319e0c-70bb-4932-8408-e633569624c8","Type":"ContainerDied","Data":"d1fad14bb015e86fa3544160d4e6094edc0404b44feedf7b17ae0019ff753b42"}
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.347559 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.642621 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-g89zb"]
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.665847 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-31eb-account-create-update-hr76v"]
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.666086 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g89zb"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.670471 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-31eb-account-create-update-hr76v"
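The probe lines above trace a typical life cycle: a startup probe flips unhealthy → started, after which the readiness probe goes from "" (no result yet) to "ready" — openstack-galera-0 walks through all four states between 17:16:23.267 and 17:16:24.347. A small state tracker over the "SyncLoop (probe)" lines, again an illustration rather than kubelet code, with the regex fitted to the format shown here:

```go
// probetrack.go — illustrative: print per-pod probe state transitions
// from "SyncLoop (probe)" journal lines read on stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var probeRe = regexp.MustCompile(`"SyncLoop \(probe\)" probe="(\w+)" status="(\w*)" pod="([^"]+)"`)

func main() {
	last := map[string]string{} // "pod/probe" -> last seen status
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		m := probeRe.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		probe, status, pod := m[1], m[2], m[3]
		key := pod + "/" + probe
		if prev, seen := last[key]; !seen || prev != status {
			fmt.Printf("%s: %s %q -> %q\n", pod, probe, last[key], status)
			last[key] = status
		}
	}
}
```

Collapsing repeats this way makes the galera and ovsdbserver startup-probe bounces easy to spot among the volume-mount noise.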
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.671427 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g89zb"]
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.673143 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.682413 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-31eb-account-create-update-hr76v"]
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.737532 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-g6sl5"]
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.738623 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-g6sl5"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.740153 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.740532 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.746390 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-g6sl5"]
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.766830 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv27p\" (UniqueName: \"kubernetes.io/projected/280bd899-5f8e-49a6-9ebc-32acff3c72e6-kube-api-access-xv27p\") pod \"keystone-31eb-account-create-update-hr76v\" (UID: \"280bd899-5f8e-49a6-9ebc-32acff3c72e6\") " pod="openstack/keystone-31eb-account-create-update-hr76v"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.766887 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280bd899-5f8e-49a6-9ebc-32acff3c72e6-operator-scripts\") pod \"keystone-31eb-account-create-update-hr76v\" (UID: \"280bd899-5f8e-49a6-9ebc-32acff3c72e6\") " pod="openstack/keystone-31eb-account-create-update-hr76v"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.766937 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fc2237-8102-4dce-ba61-c6466948289d-operator-scripts\") pod \"keystone-db-create-g89zb\" (UID: \"19fc2237-8102-4dce-ba61-c6466948289d\") " pod="openstack/keystone-db-create-g89zb"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.767036 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7xg8\" (UniqueName: \"kubernetes.io/projected/19fc2237-8102-4dce-ba61-c6466948289d-kube-api-access-g7xg8\") pod \"keystone-db-create-g89zb\" (UID: \"19fc2237-8102-4dce-ba61-c6466948289d\") " pod="openstack/keystone-db-create-g89zb"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.833900 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8ec5-account-create-update-gdhjj"]
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.834914 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8ec5-account-create-update-gdhjj"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.837067 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.841459 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8ec5-account-create-update-gdhjj"]
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.859632 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.868757 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7xg8\" (UniqueName: \"kubernetes.io/projected/19fc2237-8102-4dce-ba61-c6466948289d-kube-api-access-g7xg8\") pod \"keystone-db-create-g89zb\" (UID: \"19fc2237-8102-4dce-ba61-c6466948289d\") " pod="openstack/keystone-db-create-g89zb"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.868811 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj9cb\" (UniqueName: \"kubernetes.io/projected/a00809fd-1407-4ccf-9cd5-09cc89ac751d-kube-api-access-hj9cb\") pod \"placement-db-create-g6sl5\" (UID: \"a00809fd-1407-4ccf-9cd5-09cc89ac751d\") " pod="openstack/placement-db-create-g6sl5"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.868837 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00809fd-1407-4ccf-9cd5-09cc89ac751d-operator-scripts\") pod \"placement-db-create-g6sl5\" (UID: \"a00809fd-1407-4ccf-9cd5-09cc89ac751d\") " pod="openstack/placement-db-create-g6sl5"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.869083 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv27p\" (UniqueName: \"kubernetes.io/projected/280bd899-5f8e-49a6-9ebc-32acff3c72e6-kube-api-access-xv27p\") pod \"keystone-31eb-account-create-update-hr76v\" (UID: \"280bd899-5f8e-49a6-9ebc-32acff3c72e6\") " pod="openstack/keystone-31eb-account-create-update-hr76v"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.869123 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280bd899-5f8e-49a6-9ebc-32acff3c72e6-operator-scripts\") pod \"keystone-31eb-account-create-update-hr76v\" (UID: \"280bd899-5f8e-49a6-9ebc-32acff3c72e6\") " pod="openstack/keystone-31eb-account-create-update-hr76v"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.869158 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fc2237-8102-4dce-ba61-c6466948289d-operator-scripts\") pod \"keystone-db-create-g89zb\" (UID: \"19fc2237-8102-4dce-ba61-c6466948289d\") " pod="openstack/keystone-db-create-g89zb"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.870656 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fc2237-8102-4dce-ba61-c6466948289d-operator-scripts\") pod \"keystone-db-create-g89zb\" (UID: \"19fc2237-8102-4dce-ba61-c6466948289d\") " pod="openstack/keystone-db-create-g89zb"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.871361 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280bd899-5f8e-49a6-9ebc-32acff3c72e6-operator-scripts\") pod \"keystone-31eb-account-create-update-hr76v\" (UID: \"280bd899-5f8e-49a6-9ebc-32acff3c72e6\") " pod="openstack/keystone-31eb-account-create-update-hr76v"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.888126 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv27p\" (UniqueName: \"kubernetes.io/projected/280bd899-5f8e-49a6-9ebc-32acff3c72e6-kube-api-access-xv27p\") pod \"keystone-31eb-account-create-update-hr76v\" (UID: \"280bd899-5f8e-49a6-9ebc-32acff3c72e6\") " pod="openstack/keystone-31eb-account-create-update-hr76v"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.889753 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7xg8\" (UniqueName: \"kubernetes.io/projected/19fc2237-8102-4dce-ba61-c6466948289d-kube-api-access-g7xg8\") pod \"keystone-db-create-g89zb\" (UID: \"19fc2237-8102-4dce-ba61-c6466948289d\") " pod="openstack/keystone-db-create-g89zb"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.970644 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4293b040-1fd4-4a5f-93e8-273d0d8509ac-operator-scripts\") pod \"placement-8ec5-account-create-update-gdhjj\" (UID: \"4293b040-1fd4-4a5f-93e8-273d0d8509ac\") " pod="openstack/placement-8ec5-account-create-update-gdhjj"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.970927 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj9cb\" (UniqueName: \"kubernetes.io/projected/a00809fd-1407-4ccf-9cd5-09cc89ac751d-kube-api-access-hj9cb\") pod \"placement-db-create-g6sl5\" (UID: \"a00809fd-1407-4ccf-9cd5-09cc89ac751d\") " pod="openstack/placement-db-create-g6sl5"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.970979 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00809fd-1407-4ccf-9cd5-09cc89ac751d-operator-scripts\") pod \"placement-db-create-g6sl5\" (UID: \"a00809fd-1407-4ccf-9cd5-09cc89ac751d\") " pod="openstack/placement-db-create-g6sl5"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.971111 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gfql\" (UniqueName: \"kubernetes.io/projected/4293b040-1fd4-4a5f-93e8-273d0d8509ac-kube-api-access-4gfql\") pod \"placement-8ec5-account-create-update-gdhjj\" (UID: \"4293b040-1fd4-4a5f-93e8-273d0d8509ac\") " pod="openstack/placement-8ec5-account-create-update-gdhjj"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.971825 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00809fd-1407-4ccf-9cd5-09cc89ac751d-operator-scripts\") pod \"placement-db-create-g6sl5\" (UID: \"a00809fd-1407-4ccf-9cd5-09cc89ac751d\") " pod="openstack/placement-db-create-g6sl5"
Jan 27 17:16:24 crc kubenswrapper[5049]: I0127 17:16:24.986744 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj9cb\" (UniqueName: \"kubernetes.io/projected/a00809fd-1407-4ccf-9cd5-09cc89ac751d-kube-api-access-hj9cb\") pod \"placement-db-create-g6sl5\" (UID: \"a00809fd-1407-4ccf-9cd5-09cc89ac751d\") " pod="openstack/placement-db-create-g6sl5"
Jan 27 17:16:25 crc kubenswrapper[5049]: I0127 17:16:25.013826 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g89zb"
Jan 27 17:16:25 crc kubenswrapper[5049]: I0127 17:16:25.032511 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-31eb-account-create-update-hr76v"
Jan 27 17:16:25 crc kubenswrapper[5049]: I0127 17:16:25.059364 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-g6sl5"
Jan 27 17:16:25 crc kubenswrapper[5049]: I0127 17:16:25.072310 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4293b040-1fd4-4a5f-93e8-273d0d8509ac-operator-scripts\") pod \"placement-8ec5-account-create-update-gdhjj\" (UID: \"4293b040-1fd4-4a5f-93e8-273d0d8509ac\") " pod="openstack/placement-8ec5-account-create-update-gdhjj"
Jan 27 17:16:25 crc kubenswrapper[5049]: I0127 17:16:25.072427 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gfql\" (UniqueName: \"kubernetes.io/projected/4293b040-1fd4-4a5f-93e8-273d0d8509ac-kube-api-access-4gfql\") pod \"placement-8ec5-account-create-update-gdhjj\" (UID: \"4293b040-1fd4-4a5f-93e8-273d0d8509ac\") " pod="openstack/placement-8ec5-account-create-update-gdhjj"
Jan 27 17:16:25 crc kubenswrapper[5049]: I0127 17:16:25.073103 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4293b040-1fd4-4a5f-93e8-273d0d8509ac-operator-scripts\") pod \"placement-8ec5-account-create-update-gdhjj\" (UID: \"4293b040-1fd4-4a5f-93e8-273d0d8509ac\") " pod="openstack/placement-8ec5-account-create-update-gdhjj"
Jan 27 17:16:25 crc kubenswrapper[5049]: I0127 17:16:25.090836 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gfql\" (UniqueName: \"kubernetes.io/projected/4293b040-1fd4-4a5f-93e8-273d0d8509ac-kube-api-access-4gfql\") pod \"placement-8ec5-account-create-update-gdhjj\" (UID: \"4293b040-1fd4-4a5f-93e8-273d0d8509ac\") " pod="openstack/placement-8ec5-account-create-update-gdhjj"
Jan 27 17:16:25 crc kubenswrapper[5049]: I0127 17:16:25.148224 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8ec5-account-create-update-gdhjj"
Jan 27 17:16:25 crc kubenswrapper[5049]: I0127 17:16:25.378784 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 27 17:16:25 crc kubenswrapper[5049]: I0127 17:16:25.517551 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-g89zb"]
Jan 27 17:16:25 crc kubenswrapper[5049]: W0127 17:16:25.517833 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19fc2237_8102_4dce_ba61_c6466948289d.slice/crio-2a2222d4e79e2834e382b0593a9506ea541d7b478aed2b157c313b06c8e08ef4 WatchSource:0}: Error finding container 2a2222d4e79e2834e382b0593a9506ea541d7b478aed2b157c313b06c8e08ef4: Status 404 returned error can't find the container with id 2a2222d4e79e2834e382b0593a9506ea541d7b478aed2b157c313b06c8e08ef4
Jan 27 17:16:25 crc kubenswrapper[5049]: I0127 17:16:25.525130 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-31eb-account-create-update-hr76v"]
Jan 27 17:16:25 crc kubenswrapper[5049]: I0127 17:16:25.622558 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-g6sl5"]
Jan 27 17:16:25 crc kubenswrapper[5049]: W0127 17:16:25.632419 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda00809fd_1407_4ccf_9cd5_09cc89ac751d.slice/crio-55e22e27507a988143c1a8a08b46374a2ae356680ca12e9b00db532496793493 WatchSource:0}: Error finding container 55e22e27507a988143c1a8a08b46374a2ae356680ca12e9b00db532496793493: Status 404 returned error can't find the container with id 55e22e27507a988143c1a8a08b46374a2ae356680ca12e9b00db532496793493
Jan 27 17:16:25 crc kubenswrapper[5049]: I0127 17:16:25.729082 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8ec5-account-create-update-gdhjj"]
Jan 27 17:16:25 crc kubenswrapper[5049]: W0127 17:16:25.738319 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4293b040_1fd4_4a5f_93e8_273d0d8509ac.slice/crio-f55e6276f9e6314b038a82e912cd00f0797522e9e190add1bbcff1424f6671d5 WatchSource:0}: Error finding container f55e6276f9e6314b038a82e912cd00f0797522e9e190add1bbcff1424f6671d5: Status 404 returned error can't find the container with id f55e6276f9e6314b038a82e912cd00f0797522e9e190add1bbcff1424f6671d5
Jan 27 17:16:25 crc kubenswrapper[5049]: I0127 17:16:25.888739 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t" podUID="c3319e0c-70bb-4932-8408-e633569624c8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.99:5353: connect: connection refused"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.292761 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g89zb" event={"ID":"19fc2237-8102-4dce-ba61-c6466948289d","Type":"ContainerStarted","Data":"2a2222d4e79e2834e382b0593a9506ea541d7b478aed2b157c313b06c8e08ef4"}
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.294018 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8ec5-account-create-update-gdhjj" event={"ID":"4293b040-1fd4-4a5f-93e8-273d0d8509ac","Type":"ContainerStarted","Data":"f55e6276f9e6314b038a82e912cd00f0797522e9e190add1bbcff1424f6671d5"}
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.295559 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-31eb-account-create-update-hr76v" event={"ID":"280bd899-5f8e-49a6-9ebc-32acff3c72e6","Type":"ContainerStarted","Data":"171951aebf5e176c383da813e18478d734696455e07c78ac8e25d1aeccc4c43f"}
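The three W-level manager.go:1169 warnings above share one shape: cAdvisor sees a new crio-<id> cgroup slice before CRI-O has finished registering the container, so the lookup 404s; each of those same container ids then appears in a normal ContainerStarted PLEG event moments later (2a2222d4…, f55e6276…, 55e22e27…), which suggests these are benign startup races rather than failures. When triaging a burst like this it helps to bucket lines by klog severity and source location first; a sketch, assuming only the standard Lmmdd hh:mm:ss.uuuuuu threadid file:line] header visible in these lines:

```go
// sevcount.go — illustrative: histogram of klog lines by severity and
// file:line, to surface repeated warnings such as manager.go:1169.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches e.g. "W0127 17:16:25.517833 5049 manager.go:1169]".
var hdrRe = regexp.MustCompile(`([IWEF])\d{4} \d{2}:\d{2}:\d{2}\.\d{6}\s+\d+ ([\w.]+:\d+)\]`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := hdrRe.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]+" "+m[2]]++
		}
	}
	for k, n := range counts {
		fmt.Printf("%6d %s\n", n, k)
	}
}
```

Over this section the histogram would show the I-level reconciler and operation_generator lines dominating, a handful of W manager.go:1169 entries, and E-level lines only from projected.go and nestedpendingoperations.go at the very end.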
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.297829 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-g6sl5" event={"ID":"a00809fd-1407-4ccf-9cd5-09cc89ac751d","Type":"ContainerStarted","Data":"55e22e27507a988143c1a8a08b46374a2ae356680ca12e9b00db532496793493"}
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.485640 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-6ncxg"]
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.487517 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.510405 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6ncxg"]
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.559159 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.606230 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-config\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.606269 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-dns-svc\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.606327 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.606349 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.606375 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6sj8\" (UniqueName: \"kubernetes.io/projected/47a64870-144b-4e50-a338-4a10e39333d2-kube-api-access-q6sj8\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.708076 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-config\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.708118 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-dns-svc\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.708179 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.708202 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.708225 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6sj8\" (UniqueName: \"kubernetes.io/projected/47a64870-144b-4e50-a338-4a10e39333d2-kube-api-access-q6sj8\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.709198 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-dns-svc\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.709274 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-config\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.710096 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.710131 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.727581 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6sj8\" (UniqueName: \"kubernetes.io/projected/47a64870-144b-4e50-a338-4a10e39333d2-kube-api-access-q6sj8\") pod \"dnsmasq-dns-698758b865-6ncxg\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") " pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:26 crc kubenswrapper[5049]: I0127 17:16:26.814625 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.261018 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6ncxg"]
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.305434 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6ncxg" event={"ID":"47a64870-144b-4e50-a338-4a10e39333d2","Type":"ContainerStarted","Data":"5b545a41fcaf9c5b2378bdb959f5b6fd264dbf3464c4b8c7700e8a473fd5cf4c"}
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.619840 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.628057 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.629873 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-42gm7"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.631869 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.633107 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.633108 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.658728 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.728852 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af4a67e-8714-4d41-ab32-7b2e526a0799-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.728897 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj9ms\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-kube-api-access-lj9ms\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.728916 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0af4a67e-8714-4d41-ab32-7b2e526a0799-lock\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.729065 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0af4a67e-8714-4d41-ab32-7b2e526a0799-cache\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.729111 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.729263 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.830800 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af4a67e-8714-4d41-ab32-7b2e526a0799-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.830844 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj9ms\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-kube-api-access-lj9ms\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.830865 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0af4a67e-8714-4d41-ab32-7b2e526a0799-lock\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.830939 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0af4a67e-8714-4d41-ab32-7b2e526a0799-cache\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.830959 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0"
Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.831004 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0"
Jan 27 17:16:27 crc kubenswrapper[5049]: E0127 17:16:27.831188 5049 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 27 17:16:27 crc kubenswrapper[5049]: E0127 17:16:27.831222 5049 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 27 17:16:27 crc kubenswrapper[5049]: E0127 17:16:27.831282 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift podName:0af4a67e-8714-4d41-ab32-7b2e526a0799 nodeName:}" failed. No retries permitted until 2026-01-27 17:16:28.331261088 +0000 UTC m=+1163.430234637 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift") pod "swift-storage-0" (UID: "0af4a67e-8714-4d41-ab32-7b2e526a0799") : configmap "swift-ring-files" not found Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.831317 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.831500 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0af4a67e-8714-4d41-ab32-7b2e526a0799-lock\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0" Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.831566 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0af4a67e-8714-4d41-ab32-7b2e526a0799-cache\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0" Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.835734 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af4a67e-8714-4d41-ab32-7b2e526a0799-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0" Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.846387 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj9ms\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-kube-api-access-lj9ms\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0" Jan 27 17:16:27 crc kubenswrapper[5049]: I0127 17:16:27.850175 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.165279 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-hp2wf"] Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.181778 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hp2wf" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.183149 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hp2wf"] Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.187064 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.187283 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.187708 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.238726 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-2bn6v"] Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.239990 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2bn6v" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.241967 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqm7r\" (UniqueName: \"kubernetes.io/projected/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-kube-api-access-rqm7r\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.242045 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-combined-ca-bundle\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.242079 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-dispersionconf\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.242109 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-etc-swift\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.242142 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-swiftconf\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.242179 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-scripts\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.242202 5049 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-ring-data-devices\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.280038 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-hp2wf"] Jan 27 17:16:28 crc kubenswrapper[5049]: E0127 17:16:28.280584 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-rqm7r ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-hp2wf" podUID="6f6f9393-ec35-40b2-a900-85f55e5f2bd6" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.295976 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2bn6v"] Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.310444 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hp2wf" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.320384 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hp2wf" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.343558 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqm7r\" (UniqueName: \"kubernetes.io/projected/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-kube-api-access-rqm7r\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.343634 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.343664 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-ring-data-devices\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.343716 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-combined-ca-bundle\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf" Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.343750 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-scripts\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v" Jan 27 17:16:28 crc kubenswrapper[5049]: E0127 17:16:28.343850 5049 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found 
Jan 27 17:16:28 crc kubenswrapper[5049]: E0127 17:16:28.343887 5049 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 27 17:16:28 crc kubenswrapper[5049]: E0127 17:16:28.343936 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift podName:0af4a67e-8714-4d41-ab32-7b2e526a0799 nodeName:}" failed. No retries permitted until 2026-01-27 17:16:29.343919913 +0000 UTC m=+1164.442893462 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift") pod "swift-storage-0" (UID: "0af4a67e-8714-4d41-ab32-7b2e526a0799") : configmap "swift-ring-files" not found
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.344043 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-dispersionconf\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.344084 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk2d8\" (UniqueName: \"kubernetes.io/projected/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-kube-api-access-pk2d8\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.344120 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-etc-swift\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.344161 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-swiftconf\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.344190 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-swiftconf\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.344216 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-combined-ca-bundle\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.344253 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-etc-swift\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.344292 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-scripts\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.344328 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-ring-data-devices\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.344363 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-dispersionconf\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.345308 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-ring-data-devices\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.345374 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-etc-swift\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.345877 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-scripts\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.351263 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-combined-ca-bundle\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.356149 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-swiftconf\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.356159 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-dispersionconf\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.359215 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqm7r\" (UniqueName: \"kubernetes.io/projected/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-kube-api-access-rqm7r\") pod \"swift-ring-rebalance-hp2wf\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") " pod="openstack/swift-ring-rebalance-hp2wf"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.445956 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqm7r\" (UniqueName: \"kubernetes.io/projected/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-kube-api-access-rqm7r\") pod \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") "
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.446552 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-ring-data-devices\") pod \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") "
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.446715 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-dispersionconf\") pod \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") "
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.446827 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-combined-ca-bundle\") pod \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") "
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.446943 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-swiftconf\") pod \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") "
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.447043 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-etc-swift\") pod \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") "
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.447058 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6f6f9393-ec35-40b2-a900-85f55e5f2bd6" (UID: "6f6f9393-ec35-40b2-a900-85f55e5f2bd6"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.447237 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6f6f9393-ec35-40b2-a900-85f55e5f2bd6" (UID: "6f6f9393-ec35-40b2-a900-85f55e5f2bd6"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.447339 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-scripts\") pod \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\" (UID: \"6f6f9393-ec35-40b2-a900-85f55e5f2bd6\") "
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.447756 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-scripts" (OuterVolumeSpecName: "scripts") pod "6f6f9393-ec35-40b2-a900-85f55e5f2bd6" (UID: "6f6f9393-ec35-40b2-a900-85f55e5f2bd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.447766 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-ring-data-devices\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.447840 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-scripts\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.447885 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk2d8\" (UniqueName: \"kubernetes.io/projected/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-kube-api-access-pk2d8\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.447947 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-swiftconf\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.447972 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-combined-ca-bundle\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.448009 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-etc-swift\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.448055 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-dispersionconf\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.448126 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.448144 5049 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.448159 5049 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.448992 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-ring-data-devices\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.449056 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-scripts\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.449426 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-etc-swift\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.451377 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f6f9393-ec35-40b2-a900-85f55e5f2bd6" (UID: "6f6f9393-ec35-40b2-a900-85f55e5f2bd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.451439 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-kube-api-access-rqm7r" (OuterVolumeSpecName: "kube-api-access-rqm7r") pod "6f6f9393-ec35-40b2-a900-85f55e5f2bd6" (UID: "6f6f9393-ec35-40b2-a900-85f55e5f2bd6"). InnerVolumeSpecName "kube-api-access-rqm7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.452173 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-dispersionconf\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.452391 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6f6f9393-ec35-40b2-a900-85f55e5f2bd6" (UID: "6f6f9393-ec35-40b2-a900-85f55e5f2bd6"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.454550 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-swiftconf\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.456358 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-combined-ca-bundle\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.456464 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6f6f9393-ec35-40b2-a900-85f55e5f2bd6" (UID: "6f6f9393-ec35-40b2-a900-85f55e5f2bd6"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.466248 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk2d8\" (UniqueName: \"kubernetes.io/projected/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-kube-api-access-pk2d8\") pod \"swift-ring-rebalance-2bn6v\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.549253 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqm7r\" (UniqueName: \"kubernetes.io/projected/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-kube-api-access-rqm7r\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.549283 5049 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.549294 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.549305 5049 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6f6f9393-ec35-40b2-a900-85f55e5f2bd6-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.612039 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2bn6v"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.720031 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t"
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.854162 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3319e0c-70bb-4932-8408-e633569624c8-config\") pod \"c3319e0c-70bb-4932-8408-e633569624c8\" (UID: \"c3319e0c-70bb-4932-8408-e633569624c8\") "
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.854600 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3319e0c-70bb-4932-8408-e633569624c8-dns-svc\") pod \"c3319e0c-70bb-4932-8408-e633569624c8\" (UID: \"c3319e0c-70bb-4932-8408-e633569624c8\") "
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.854642 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpphx\" (UniqueName: \"kubernetes.io/projected/c3319e0c-70bb-4932-8408-e633569624c8-kube-api-access-mpphx\") pod \"c3319e0c-70bb-4932-8408-e633569624c8\" (UID: \"c3319e0c-70bb-4932-8408-e633569624c8\") "
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.860560 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3319e0c-70bb-4932-8408-e633569624c8-kube-api-access-mpphx" (OuterVolumeSpecName: "kube-api-access-mpphx") pod "c3319e0c-70bb-4932-8408-e633569624c8" (UID: "c3319e0c-70bb-4932-8408-e633569624c8"). InnerVolumeSpecName "kube-api-access-mpphx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.890198 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3319e0c-70bb-4932-8408-e633569624c8-config" (OuterVolumeSpecName: "config") pod "c3319e0c-70bb-4932-8408-e633569624c8" (UID: "c3319e0c-70bb-4932-8408-e633569624c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.890364 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3319e0c-70bb-4932-8408-e633569624c8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3319e0c-70bb-4932-8408-e633569624c8" (UID: "c3319e0c-70bb-4932-8408-e633569624c8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.956248 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3319e0c-70bb-4932-8408-e633569624c8-config\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.956275 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3319e0c-70bb-4932-8408-e633569624c8-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:28 crc kubenswrapper[5049]: I0127 17:16:28.956284 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpphx\" (UniqueName: \"kubernetes.io/projected/c3319e0c-70bb-4932-8408-e633569624c8-kube-api-access-mpphx\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.117716 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2bn6v"]
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.319247 5049 generic.go:334] "Generic (PLEG): container finished" podID="4293b040-1fd4-4a5f-93e8-273d0d8509ac" containerID="eb38fd215d77c9573ef1e1ca9a9a7e1ac4fc553b5e11691ff33926e5721f8fc7" exitCode=0
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.319318 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8ec5-account-create-update-gdhjj" event={"ID":"4293b040-1fd4-4a5f-93e8-273d0d8509ac","Type":"ContainerDied","Data":"eb38fd215d77c9573ef1e1ca9a9a7e1ac4fc553b5e11691ff33926e5721f8fc7"}
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.321270 5049 generic.go:334] "Generic (PLEG): container finished" podID="280bd899-5f8e-49a6-9ebc-32acff3c72e6" containerID="8535904c57b170344be1f5cca8b6294c359e2cef513852c66525957473fdeee9" exitCode=0
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.321332 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-31eb-account-create-update-hr76v" event={"ID":"280bd899-5f8e-49a6-9ebc-32acff3c72e6","Type":"ContainerDied","Data":"8535904c57b170344be1f5cca8b6294c359e2cef513852c66525957473fdeee9"}
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.322645 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2bn6v" event={"ID":"2c03cd98-3721-4e1d-9a3c-5f0547f067ff","Type":"ContainerStarted","Data":"92e5c4ce46de5ac29d7137cbb8f086f72e1f13559dd4eb8c478556172acf4466"}
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.324336 5049 generic.go:334] "Generic (PLEG): container finished" podID="a00809fd-1407-4ccf-9cd5-09cc89ac751d" containerID="c81601cbfa3e2090ea7c52671baa125f1040d78ee414966c3e9c6e687d304585" exitCode=0
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.324409 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-g6sl5" event={"ID":"a00809fd-1407-4ccf-9cd5-09cc89ac751d","Type":"ContainerDied","Data":"c81601cbfa3e2090ea7c52671baa125f1040d78ee414966c3e9c6e687d304585"}
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.326115 5049 generic.go:334] "Generic (PLEG): container finished" podID="47a64870-144b-4e50-a338-4a10e39333d2" containerID="7a0358aa2ff2d627a2df8479fcaa11e94076142f768005f18f7cdddd0ee9389c" exitCode=0
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.326234 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6ncxg" event={"ID":"47a64870-144b-4e50-a338-4a10e39333d2","Type":"ContainerDied","Data":"7a0358aa2ff2d627a2df8479fcaa11e94076142f768005f18f7cdddd0ee9389c"}
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.328900 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t"
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.331353 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-f9s5t" event={"ID":"c3319e0c-70bb-4932-8408-e633569624c8","Type":"ContainerDied","Data":"3098fc1c7457786b42e3d87a38065ed4ded34db6726f0dce844453b25bc52072"}
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.331421 5049 scope.go:117] "RemoveContainer" containerID="d1fad14bb015e86fa3544160d4e6094edc0404b44feedf7b17ae0019ff753b42"
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.333471 5049 generic.go:334] "Generic (PLEG): container finished" podID="19fc2237-8102-4dce-ba61-c6466948289d" containerID="5c3c67279357ddab656f96c0c019ac6843e4ad85d5a6014a0ba499293d60cbb2" exitCode=0
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.333545 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hp2wf"
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.334249 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g89zb" event={"ID":"19fc2237-8102-4dce-ba61-c6466948289d","Type":"ContainerDied","Data":"5c3c67279357ddab656f96c0c019ac6843e4ad85d5a6014a0ba499293d60cbb2"}
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.362437 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0"
Jan 27 17:16:29 crc kubenswrapper[5049]: E0127 17:16:29.362593 5049 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 27 17:16:29 crc kubenswrapper[5049]: E0127 17:16:29.363149 5049 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 27 17:16:29 crc kubenswrapper[5049]: E0127 17:16:29.363239 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift podName:0af4a67e-8714-4d41-ab32-7b2e526a0799 nodeName:}" failed. No retries permitted until 2026-01-27 17:16:31.363220099 +0000 UTC m=+1166.462193648 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift") pod "swift-storage-0" (UID: "0af4a67e-8714-4d41-ab32-7b2e526a0799") : configmap "swift-ring-files" not found
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.455417 5049 scope.go:117] "RemoveContainer" containerID="b5a2e5ba0b5a07d3bef1d7dfe0e0e0cf1eec4aad628e56a3f3a24bd46f764c1d"
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.509854 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-hp2wf"]
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.521442 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-hp2wf"]
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.528327 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-f9s5t"]
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.543977 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-f9s5t"]
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.655570 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6f9393-ec35-40b2-a900-85f55e5f2bd6" path="/var/lib/kubelet/pods/6f6f9393-ec35-40b2-a900-85f55e5f2bd6/volumes"
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.656056 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3319e0c-70bb-4932-8408-e633569624c8" path="/var/lib/kubelet/pods/c3319e0c-70bb-4932-8408-e633569624c8/volumes"
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.958569 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-ng4wf"]
Jan 27 17:16:29 crc kubenswrapper[5049]: E0127 17:16:29.958912 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3319e0c-70bb-4932-8408-e633569624c8" containerName="init"
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.958926 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3319e0c-70bb-4932-8408-e633569624c8" containerName="init"
Jan 27 17:16:29 crc kubenswrapper[5049]: E0127 17:16:29.958945 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3319e0c-70bb-4932-8408-e633569624c8" containerName="dnsmasq-dns"
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.958953 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3319e0c-70bb-4932-8408-e633569624c8" containerName="dnsmasq-dns"
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.959121 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3319e0c-70bb-4932-8408-e633569624c8" containerName="dnsmasq-dns"
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.959566 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ng4wf"
Jan 27 17:16:29 crc kubenswrapper[5049]: I0127 17:16:29.995965 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ng4wf"]
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.076073 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17b9e608-225d-4568-9309-2228a13b66f7-operator-scripts\") pod \"glance-db-create-ng4wf\" (UID: \"17b9e608-225d-4568-9309-2228a13b66f7\") " pod="openstack/glance-db-create-ng4wf"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.076260 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svn99\" (UniqueName: \"kubernetes.io/projected/17b9e608-225d-4568-9309-2228a13b66f7-kube-api-access-svn99\") pod \"glance-db-create-ng4wf\" (UID: \"17b9e608-225d-4568-9309-2228a13b66f7\") " pod="openstack/glance-db-create-ng4wf"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.097285 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d1fd-account-create-update-c6g2d"]
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.098400 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d1fd-account-create-update-c6g2d"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.101088 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.121619 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d1fd-account-create-update-c6g2d"]
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.177549 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17b9e608-225d-4568-9309-2228a13b66f7-operator-scripts\") pod \"glance-db-create-ng4wf\" (UID: \"17b9e608-225d-4568-9309-2228a13b66f7\") " pod="openstack/glance-db-create-ng4wf"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.178230 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17b9e608-225d-4568-9309-2228a13b66f7-operator-scripts\") pod \"glance-db-create-ng4wf\" (UID: \"17b9e608-225d-4568-9309-2228a13b66f7\") " pod="openstack/glance-db-create-ng4wf"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.178381 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d64a07f1-44c1-4b82-9ba5-23580e61ddff-operator-scripts\") pod \"glance-d1fd-account-create-update-c6g2d\" (UID: \"d64a07f1-44c1-4b82-9ba5-23580e61ddff\") " pod="openstack/glance-d1fd-account-create-update-c6g2d"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.178414 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz4np\" (UniqueName: \"kubernetes.io/projected/d64a07f1-44c1-4b82-9ba5-23580e61ddff-kube-api-access-cz4np\") pod \"glance-d1fd-account-create-update-c6g2d\" (UID: \"d64a07f1-44c1-4b82-9ba5-23580e61ddff\") " pod="openstack/glance-d1fd-account-create-update-c6g2d"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.178519 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svn99\" (UniqueName: \"kubernetes.io/projected/17b9e608-225d-4568-9309-2228a13b66f7-kube-api-access-svn99\") pod \"glance-db-create-ng4wf\" (UID: \"17b9e608-225d-4568-9309-2228a13b66f7\") " pod="openstack/glance-db-create-ng4wf"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.196339 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svn99\" (UniqueName: \"kubernetes.io/projected/17b9e608-225d-4568-9309-2228a13b66f7-kube-api-access-svn99\") pod \"glance-db-create-ng4wf\" (UID: \"17b9e608-225d-4568-9309-2228a13b66f7\") " pod="openstack/glance-db-create-ng4wf"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.274273 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ng4wf"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.280888 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d64a07f1-44c1-4b82-9ba5-23580e61ddff-operator-scripts\") pod \"glance-d1fd-account-create-update-c6g2d\" (UID: \"d64a07f1-44c1-4b82-9ba5-23580e61ddff\") " pod="openstack/glance-d1fd-account-create-update-c6g2d"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.281749 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz4np\" (UniqueName: \"kubernetes.io/projected/d64a07f1-44c1-4b82-9ba5-23580e61ddff-kube-api-access-cz4np\") pod \"glance-d1fd-account-create-update-c6g2d\" (UID: \"d64a07f1-44c1-4b82-9ba5-23580e61ddff\") " pod="openstack/glance-d1fd-account-create-update-c6g2d"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.281687 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d64a07f1-44c1-4b82-9ba5-23580e61ddff-operator-scripts\") pod \"glance-d1fd-account-create-update-c6g2d\" (UID: \"d64a07f1-44c1-4b82-9ba5-23580e61ddff\") " pod="openstack/glance-d1fd-account-create-update-c6g2d"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.296802 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz4np\" (UniqueName: \"kubernetes.io/projected/d64a07f1-44c1-4b82-9ba5-23580e61ddff-kube-api-access-cz4np\") pod \"glance-d1fd-account-create-update-c6g2d\" (UID: \"d64a07f1-44c1-4b82-9ba5-23580e61ddff\") " pod="openstack/glance-d1fd-account-create-update-c6g2d"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.343805 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6ncxg" event={"ID":"47a64870-144b-4e50-a338-4a10e39333d2","Type":"ContainerStarted","Data":"18a31fb37cd3e2dfb9da97f69a2ef54c149621f0405876f3cb7f425f48e3d989"}
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.344932 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.367419 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-6ncxg" podStartSLOduration=4.367399019 podStartE2EDuration="4.367399019s" podCreationTimestamp="2026-01-27 17:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:16:30.36500501 +0000 UTC m=+1165.463978569" watchObservedRunningTime="2026-01-27 17:16:30.367399019 +0000 UTC m=+1165.466372578"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.421077 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d1fd-account-create-update-c6g2d"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.884127 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-g6sl5"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.890047 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8ec5-account-create-update-gdhjj"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.896112 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-31eb-account-create-update-hr76v"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.898771 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g89zb"
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.942148 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ng4wf"]
Jan 27 17:16:30 crc kubenswrapper[5049]: W0127 17:16:30.953008 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17b9e608_225d_4568_9309_2228a13b66f7.slice/crio-f94985cb427e8fe8f478f15b3b54f8d2b1d0ebe1423590c3524b3a8c0f7e7088 WatchSource:0}: Error finding container f94985cb427e8fe8f478f15b3b54f8d2b1d0ebe1423590c3524b3a8c0f7e7088: Status 404 returned error can't find the container with id f94985cb427e8fe8f478f15b3b54f8d2b1d0ebe1423590c3524b3a8c0f7e7088
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.980838 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d1fd-account-create-update-c6g2d"]
Jan 27 17:16:30 crc kubenswrapper[5049]: W0127 17:16:30.990370 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd64a07f1_44c1_4b82_9ba5_23580e61ddff.slice/crio-534cd4670fab4a9eddcab88b0be8541eef9ea6262cb995664269895399876cab WatchSource:0}: Error finding container 534cd4670fab4a9eddcab88b0be8541eef9ea6262cb995664269895399876cab: Status 404 returned error can't find the container with id 534cd4670fab4a9eddcab88b0be8541eef9ea6262cb995664269895399876cab
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.997530 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00809fd-1407-4ccf-9cd5-09cc89ac751d-operator-scripts\") pod \"a00809fd-1407-4ccf-9cd5-09cc89ac751d\" (UID: \"a00809fd-1407-4ccf-9cd5-09cc89ac751d\") "
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.997600 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4293b040-1fd4-4a5f-93e8-273d0d8509ac-operator-scripts\") pod \"4293b040-1fd4-4a5f-93e8-273d0d8509ac\" (UID: \"4293b040-1fd4-4a5f-93e8-273d0d8509ac\") "
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.997654 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fc2237-8102-4dce-ba61-c6466948289d-operator-scripts\") pod \"19fc2237-8102-4dce-ba61-c6466948289d\" (UID: \"19fc2237-8102-4dce-ba61-c6466948289d\") "
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.997712 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280bd899-5f8e-49a6-9ebc-32acff3c72e6-operator-scripts\") pod \"280bd899-5f8e-49a6-9ebc-32acff3c72e6\" (UID: \"280bd899-5f8e-49a6-9ebc-32acff3c72e6\") "
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.997783 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv27p\" (UniqueName: \"kubernetes.io/projected/280bd899-5f8e-49a6-9ebc-32acff3c72e6-kube-api-access-xv27p\") pod \"280bd899-5f8e-49a6-9ebc-32acff3c72e6\" (UID: \"280bd899-5f8e-49a6-9ebc-32acff3c72e6\") "
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.997854 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj9cb\" (UniqueName: \"kubernetes.io/projected/a00809fd-1407-4ccf-9cd5-09cc89ac751d-kube-api-access-hj9cb\") pod \"a00809fd-1407-4ccf-9cd5-09cc89ac751d\" (UID: \"a00809fd-1407-4ccf-9cd5-09cc89ac751d\") "
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.997901 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gfql\" (UniqueName: \"kubernetes.io/projected/4293b040-1fd4-4a5f-93e8-273d0d8509ac-kube-api-access-4gfql\") pod \"4293b040-1fd4-4a5f-93e8-273d0d8509ac\" (UID: \"4293b040-1fd4-4a5f-93e8-273d0d8509ac\") "
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.997919 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a00809fd-1407-4ccf-9cd5-09cc89ac751d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a00809fd-1407-4ccf-9cd5-09cc89ac751d" (UID: "a00809fd-1407-4ccf-9cd5-09cc89ac751d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.997924 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7xg8\" (UniqueName: \"kubernetes.io/projected/19fc2237-8102-4dce-ba61-c6466948289d-kube-api-access-g7xg8\") pod \"19fc2237-8102-4dce-ba61-c6466948289d\" (UID: \"19fc2237-8102-4dce-ba61-c6466948289d\") "
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.998264 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a00809fd-1407-4ccf-9cd5-09cc89ac751d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.998302 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19fc2237-8102-4dce-ba61-c6466948289d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19fc2237-8102-4dce-ba61-c6466948289d" (UID: "19fc2237-8102-4dce-ba61-c6466948289d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.998402 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/280bd899-5f8e-49a6-9ebc-32acff3c72e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "280bd899-5f8e-49a6-9ebc-32acff3c72e6" (UID: "280bd899-5f8e-49a6-9ebc-32acff3c72e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:16:30 crc kubenswrapper[5049]: I0127 17:16:30.998667 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4293b040-1fd4-4a5f-93e8-273d0d8509ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4293b040-1fd4-4a5f-93e8-273d0d8509ac" (UID: "4293b040-1fd4-4a5f-93e8-273d0d8509ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.003461 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/280bd899-5f8e-49a6-9ebc-32acff3c72e6-kube-api-access-xv27p" (OuterVolumeSpecName: "kube-api-access-xv27p") pod "280bd899-5f8e-49a6-9ebc-32acff3c72e6" (UID: "280bd899-5f8e-49a6-9ebc-32acff3c72e6"). InnerVolumeSpecName "kube-api-access-xv27p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.003546 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00809fd-1407-4ccf-9cd5-09cc89ac751d-kube-api-access-hj9cb" (OuterVolumeSpecName: "kube-api-access-hj9cb") pod "a00809fd-1407-4ccf-9cd5-09cc89ac751d" (UID: "a00809fd-1407-4ccf-9cd5-09cc89ac751d"). InnerVolumeSpecName "kube-api-access-hj9cb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.003616 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19fc2237-8102-4dce-ba61-c6466948289d-kube-api-access-g7xg8" (OuterVolumeSpecName: "kube-api-access-g7xg8") pod "19fc2237-8102-4dce-ba61-c6466948289d" (UID: "19fc2237-8102-4dce-ba61-c6466948289d"). InnerVolumeSpecName "kube-api-access-g7xg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.003650 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4293b040-1fd4-4a5f-93e8-273d0d8509ac-kube-api-access-4gfql" (OuterVolumeSpecName: "kube-api-access-4gfql") pod "4293b040-1fd4-4a5f-93e8-273d0d8509ac" (UID: "4293b040-1fd4-4a5f-93e8-273d0d8509ac"). InnerVolumeSpecName "kube-api-access-4gfql".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.099738 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gfql\" (UniqueName: \"kubernetes.io/projected/4293b040-1fd4-4a5f-93e8-273d0d8509ac-kube-api-access-4gfql\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.100051 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7xg8\" (UniqueName: \"kubernetes.io/projected/19fc2237-8102-4dce-ba61-c6466948289d-kube-api-access-g7xg8\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.100065 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4293b040-1fd4-4a5f-93e8-273d0d8509ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.100077 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19fc2237-8102-4dce-ba61-c6466948289d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.100088 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/280bd899-5f8e-49a6-9ebc-32acff3c72e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.100096 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv27p\" (UniqueName: \"kubernetes.io/projected/280bd899-5f8e-49a6-9ebc-32acff3c72e6-kube-api-access-xv27p\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.100105 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj9cb\" (UniqueName: \"kubernetes.io/projected/a00809fd-1407-4ccf-9cd5-09cc89ac751d-kube-api-access-hj9cb\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.353915 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-31eb-account-create-update-hr76v" event={"ID":"280bd899-5f8e-49a6-9ebc-32acff3c72e6","Type":"ContainerDied","Data":"171951aebf5e176c383da813e18478d734696455e07c78ac8e25d1aeccc4c43f"} Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.353956 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="171951aebf5e176c383da813e18478d734696455e07c78ac8e25d1aeccc4c43f" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.354006 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-31eb-account-create-update-hr76v" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.362889 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-g6sl5" event={"ID":"a00809fd-1407-4ccf-9cd5-09cc89ac751d","Type":"ContainerDied","Data":"55e22e27507a988143c1a8a08b46374a2ae356680ca12e9b00db532496793493"} Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.362934 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55e22e27507a988143c1a8a08b46374a2ae356680ca12e9b00db532496793493" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.362937 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-g6sl5" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.364507 5049 generic.go:334] "Generic (PLEG): container finished" podID="d64a07f1-44c1-4b82-9ba5-23580e61ddff" containerID="eb7bb6bdc696d16a9391e8b7d03b5fc3378a5cabbebba4c36fb5b1740306e76c" exitCode=0 Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.364543 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d1fd-account-create-update-c6g2d" event={"ID":"d64a07f1-44c1-4b82-9ba5-23580e61ddff","Type":"ContainerDied","Data":"eb7bb6bdc696d16a9391e8b7d03b5fc3378a5cabbebba4c36fb5b1740306e76c"} Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.364592 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d1fd-account-create-update-c6g2d" event={"ID":"d64a07f1-44c1-4b82-9ba5-23580e61ddff","Type":"ContainerStarted","Data":"534cd4670fab4a9eddcab88b0be8541eef9ea6262cb995664269895399876cab"} Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.368078 5049 generic.go:334] "Generic (PLEG): container finished" podID="17b9e608-225d-4568-9309-2228a13b66f7" containerID="e149f916270c7d8e961b7164f68aa366540d8c58dc14d2240d6b3bb65c9a6dd5" exitCode=0 Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.368146 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ng4wf" event={"ID":"17b9e608-225d-4568-9309-2228a13b66f7","Type":"ContainerDied","Data":"e149f916270c7d8e961b7164f68aa366540d8c58dc14d2240d6b3bb65c9a6dd5"} Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.368190 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ng4wf" event={"ID":"17b9e608-225d-4568-9309-2228a13b66f7","Type":"ContainerStarted","Data":"f94985cb427e8fe8f478f15b3b54f8d2b1d0ebe1423590c3524b3a8c0f7e7088"} Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.369937 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-g89zb" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.369964 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-g89zb" event={"ID":"19fc2237-8102-4dce-ba61-c6466948289d","Type":"ContainerDied","Data":"2a2222d4e79e2834e382b0593a9506ea541d7b478aed2b157c313b06c8e08ef4"} Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.370000 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a2222d4e79e2834e382b0593a9506ea541d7b478aed2b157c313b06c8e08ef4" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.372814 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8ec5-account-create-update-gdhjj" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.373462 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8ec5-account-create-update-gdhjj" event={"ID":"4293b040-1fd4-4a5f-93e8-273d0d8509ac","Type":"ContainerDied","Data":"f55e6276f9e6314b038a82e912cd00f0797522e9e190add1bbcff1424f6671d5"} Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.373492 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f55e6276f9e6314b038a82e912cd00f0797522e9e190add1bbcff1424f6671d5" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.405899 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0" Jan 27 17:16:31 crc kubenswrapper[5049]: E0127 17:16:31.406119 5049 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 17:16:31 crc kubenswrapper[5049]: E0127 17:16:31.406142 5049 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 17:16:31 crc kubenswrapper[5049]: E0127 17:16:31.406194 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift podName:0af4a67e-8714-4d41-ab32-7b2e526a0799 nodeName:}" failed. No retries permitted until 2026-01-27 17:16:35.406177335 +0000 UTC m=+1170.505150884 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift") pod "swift-storage-0" (UID: "0af4a67e-8714-4d41-ab32-7b2e526a0799") : configmap "swift-ring-files" not found Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.918766 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pdn59"] Jan 27 17:16:31 crc kubenswrapper[5049]: E0127 17:16:31.919124 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="280bd899-5f8e-49a6-9ebc-32acff3c72e6" containerName="mariadb-account-create-update" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.919135 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="280bd899-5f8e-49a6-9ebc-32acff3c72e6" containerName="mariadb-account-create-update" Jan 27 17:16:31 crc kubenswrapper[5049]: E0127 17:16:31.919155 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19fc2237-8102-4dce-ba61-c6466948289d" containerName="mariadb-database-create" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.919163 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="19fc2237-8102-4dce-ba61-c6466948289d" containerName="mariadb-database-create" Jan 27 17:16:31 crc kubenswrapper[5049]: E0127 17:16:31.919190 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4293b040-1fd4-4a5f-93e8-273d0d8509ac" containerName="mariadb-account-create-update" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.919198 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4293b040-1fd4-4a5f-93e8-273d0d8509ac" containerName="mariadb-account-create-update" Jan 27 17:16:31 crc kubenswrapper[5049]: E0127 17:16:31.919208 5049 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a00809fd-1407-4ccf-9cd5-09cc89ac751d" containerName="mariadb-database-create" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.919215 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00809fd-1407-4ccf-9cd5-09cc89ac751d" containerName="mariadb-database-create" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.919418 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="280bd899-5f8e-49a6-9ebc-32acff3c72e6" containerName="mariadb-account-create-update" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.919433 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00809fd-1407-4ccf-9cd5-09cc89ac751d" containerName="mariadb-database-create" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.919443 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="19fc2237-8102-4dce-ba61-c6466948289d" containerName="mariadb-database-create" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.919462 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="4293b040-1fd4-4a5f-93e8-273d0d8509ac" containerName="mariadb-account-create-update" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.919977 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pdn59" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.921940 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 17:16:31 crc kubenswrapper[5049]: I0127 17:16:31.928272 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pdn59"] Jan 27 17:16:32 crc kubenswrapper[5049]: I0127 17:16:32.022501 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ebc5db-d0db-4209-b469-39ce600dfb97-operator-scripts\") pod \"root-account-create-update-pdn59\" (UID: \"94ebc5db-d0db-4209-b469-39ce600dfb97\") " pod="openstack/root-account-create-update-pdn59" Jan 27 17:16:32 crc kubenswrapper[5049]: I0127 17:16:32.022694 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6zbz\" (UniqueName: \"kubernetes.io/projected/94ebc5db-d0db-4209-b469-39ce600dfb97-kube-api-access-v6zbz\") pod \"root-account-create-update-pdn59\" (UID: \"94ebc5db-d0db-4209-b469-39ce600dfb97\") " pod="openstack/root-account-create-update-pdn59" Jan 27 17:16:32 crc kubenswrapper[5049]: I0127 17:16:32.125027 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ebc5db-d0db-4209-b469-39ce600dfb97-operator-scripts\") pod \"root-account-create-update-pdn59\" (UID: \"94ebc5db-d0db-4209-b469-39ce600dfb97\") " pod="openstack/root-account-create-update-pdn59" Jan 27 17:16:32 crc kubenswrapper[5049]: I0127 17:16:32.125329 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6zbz\" (UniqueName: \"kubernetes.io/projected/94ebc5db-d0db-4209-b469-39ce600dfb97-kube-api-access-v6zbz\") pod \"root-account-create-update-pdn59\" (UID: \"94ebc5db-d0db-4209-b469-39ce600dfb97\") " pod="openstack/root-account-create-update-pdn59" Jan 27 17:16:32 crc kubenswrapper[5049]: I0127 17:16:32.126166 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/94ebc5db-d0db-4209-b469-39ce600dfb97-operator-scripts\") pod \"root-account-create-update-pdn59\" (UID: \"94ebc5db-d0db-4209-b469-39ce600dfb97\") " pod="openstack/root-account-create-update-pdn59" Jan 27 17:16:32 crc kubenswrapper[5049]: I0127 17:16:32.145992 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6zbz\" (UniqueName: \"kubernetes.io/projected/94ebc5db-d0db-4209-b469-39ce600dfb97-kube-api-access-v6zbz\") pod \"root-account-create-update-pdn59\" (UID: \"94ebc5db-d0db-4209-b469-39ce600dfb97\") " pod="openstack/root-account-create-update-pdn59" Jan 27 17:16:32 crc kubenswrapper[5049]: I0127 17:16:32.249603 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pdn59" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.268208 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d1fd-account-create-update-c6g2d" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.341225 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ng4wf" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.343811 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d64a07f1-44c1-4b82-9ba5-23580e61ddff-operator-scripts\") pod \"d64a07f1-44c1-4b82-9ba5-23580e61ddff\" (UID: \"d64a07f1-44c1-4b82-9ba5-23580e61ddff\") " Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.343883 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz4np\" (UniqueName: \"kubernetes.io/projected/d64a07f1-44c1-4b82-9ba5-23580e61ddff-kube-api-access-cz4np\") pod \"d64a07f1-44c1-4b82-9ba5-23580e61ddff\" (UID: \"d64a07f1-44c1-4b82-9ba5-23580e61ddff\") " Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.344562 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d64a07f1-44c1-4b82-9ba5-23580e61ddff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d64a07f1-44c1-4b82-9ba5-23580e61ddff" (UID: "d64a07f1-44c1-4b82-9ba5-23580e61ddff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.356964 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d64a07f1-44c1-4b82-9ba5-23580e61ddff-kube-api-access-cz4np" (OuterVolumeSpecName: "kube-api-access-cz4np") pod "d64a07f1-44c1-4b82-9ba5-23580e61ddff" (UID: "d64a07f1-44c1-4b82-9ba5-23580e61ddff"). InnerVolumeSpecName "kube-api-access-cz4np". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.390976 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ng4wf" event={"ID":"17b9e608-225d-4568-9309-2228a13b66f7","Type":"ContainerDied","Data":"f94985cb427e8fe8f478f15b3b54f8d2b1d0ebe1423590c3524b3a8c0f7e7088"} Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.391009 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f94985cb427e8fe8f478f15b3b54f8d2b1d0ebe1423590c3524b3a8c0f7e7088" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.391054 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-ng4wf" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.392463 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d1fd-account-create-update-c6g2d" event={"ID":"d64a07f1-44c1-4b82-9ba5-23580e61ddff","Type":"ContainerDied","Data":"534cd4670fab4a9eddcab88b0be8541eef9ea6262cb995664269895399876cab"} Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.392482 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="534cd4670fab4a9eddcab88b0be8541eef9ea6262cb995664269895399876cab" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.392510 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d1fd-account-create-update-c6g2d" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.445448 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svn99\" (UniqueName: \"kubernetes.io/projected/17b9e608-225d-4568-9309-2228a13b66f7-kube-api-access-svn99\") pod \"17b9e608-225d-4568-9309-2228a13b66f7\" (UID: \"17b9e608-225d-4568-9309-2228a13b66f7\") " Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.445499 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17b9e608-225d-4568-9309-2228a13b66f7-operator-scripts\") pod \"17b9e608-225d-4568-9309-2228a13b66f7\" (UID: \"17b9e608-225d-4568-9309-2228a13b66f7\") " Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.446061 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b9e608-225d-4568-9309-2228a13b66f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17b9e608-225d-4568-9309-2228a13b66f7" (UID: "17b9e608-225d-4568-9309-2228a13b66f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.447871 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz4np\" (UniqueName: \"kubernetes.io/projected/d64a07f1-44c1-4b82-9ba5-23580e61ddff-kube-api-access-cz4np\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.447911 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17b9e608-225d-4568-9309-2228a13b66f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.447925 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d64a07f1-44c1-4b82-9ba5-23580e61ddff-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.463247 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b9e608-225d-4568-9309-2228a13b66f7-kube-api-access-svn99" (OuterVolumeSpecName: "kube-api-access-svn99") pod "17b9e608-225d-4568-9309-2228a13b66f7" (UID: "17b9e608-225d-4568-9309-2228a13b66f7"). InnerVolumeSpecName "kube-api-access-svn99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.549764 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svn99\" (UniqueName: \"kubernetes.io/projected/17b9e608-225d-4568-9309-2228a13b66f7-kube-api-access-svn99\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:33 crc kubenswrapper[5049]: I0127 17:16:33.622375 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pdn59"] Jan 27 17:16:33 crc kubenswrapper[5049]: W0127 17:16:33.631149 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94ebc5db_d0db_4209_b469_39ce600dfb97.slice/crio-3a87f61d1199d91ad66a5980f7205a52abf4fa0dd38eb6ff5136bb1ecb0f302e WatchSource:0}: Error finding container 3a87f61d1199d91ad66a5980f7205a52abf4fa0dd38eb6ff5136bb1ecb0f302e: Status 404 returned error can't find the container with id 3a87f61d1199d91ad66a5980f7205a52abf4fa0dd38eb6ff5136bb1ecb0f302e Jan 27 17:16:34 crc kubenswrapper[5049]: I0127 17:16:34.405047 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2bn6v" event={"ID":"2c03cd98-3721-4e1d-9a3c-5f0547f067ff","Type":"ContainerStarted","Data":"7ff2b20680587ac503ca41e9a990c3073fbbedfa45194f1c414afb82e3c9c863"} Jan 27 17:16:34 crc kubenswrapper[5049]: I0127 17:16:34.406923 5049 generic.go:334] "Generic (PLEG): container finished" podID="94ebc5db-d0db-4209-b469-39ce600dfb97" containerID="fb5db9a00e5113f7d25d292ca3886b1f41516fc87005b0634e2553727856eb8a" exitCode=0 Jan 27 17:16:34 crc kubenswrapper[5049]: I0127 17:16:34.406984 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pdn59" event={"ID":"94ebc5db-d0db-4209-b469-39ce600dfb97","Type":"ContainerDied","Data":"fb5db9a00e5113f7d25d292ca3886b1f41516fc87005b0634e2553727856eb8a"} Jan 27 17:16:34 crc kubenswrapper[5049]: I0127 17:16:34.407032 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pdn59" event={"ID":"94ebc5db-d0db-4209-b469-39ce600dfb97","Type":"ContainerStarted","Data":"3a87f61d1199d91ad66a5980f7205a52abf4fa0dd38eb6ff5136bb1ecb0f302e"} Jan 27 17:16:34 crc kubenswrapper[5049]: I0127 17:16:34.431453 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-2bn6v" podStartSLOduration=2.35875863 podStartE2EDuration="6.431428812s" podCreationTimestamp="2026-01-27 17:16:28 +0000 UTC" firstStartedPulling="2026-01-27 17:16:29.110565463 +0000 UTC m=+1164.209539022" lastFinishedPulling="2026-01-27 17:16:33.183235655 +0000 UTC m=+1168.282209204" observedRunningTime="2026-01-27 17:16:34.420231209 +0000 UTC m=+1169.519204798" watchObservedRunningTime="2026-01-27 17:16:34.431428812 +0000 UTC m=+1169.530402401" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.395836 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-dhqbn"] Jan 27 17:16:35 crc kubenswrapper[5049]: E0127 17:16:35.396600 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b9e608-225d-4568-9309-2228a13b66f7" containerName="mariadb-database-create" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.396631 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b9e608-225d-4568-9309-2228a13b66f7" containerName="mariadb-database-create" Jan 27 17:16:35 crc kubenswrapper[5049]: E0127 17:16:35.396696 5049 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d64a07f1-44c1-4b82-9ba5-23580e61ddff" containerName="mariadb-account-create-update" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.396709 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d64a07f1-44c1-4b82-9ba5-23580e61ddff" containerName="mariadb-account-create-update" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.396978 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="d64a07f1-44c1-4b82-9ba5-23580e61ddff" containerName="mariadb-account-create-update" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.396998 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="17b9e608-225d-4568-9309-2228a13b66f7" containerName="mariadb-database-create" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.397598 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dhqbn" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.402227 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vv9mx" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.402321 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.411297 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dhqbn"] Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.485004 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.485112 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-combined-ca-bundle\") pod \"glance-db-sync-dhqbn\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " pod="openstack/glance-db-sync-dhqbn" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.485160 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-db-sync-config-data\") pod \"glance-db-sync-dhqbn\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " pod="openstack/glance-db-sync-dhqbn" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.485203 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvf4w\" (UniqueName: \"kubernetes.io/projected/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-kube-api-access-qvf4w\") pod \"glance-db-sync-dhqbn\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " pod="openstack/glance-db-sync-dhqbn" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.485227 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-config-data\") pod \"glance-db-sync-dhqbn\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " pod="openstack/glance-db-sync-dhqbn" Jan 27 17:16:35 crc kubenswrapper[5049]: E0127 17:16:35.485656 5049 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 17:16:35 crc 
kubenswrapper[5049]: E0127 17:16:35.485729 5049 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 17:16:35 crc kubenswrapper[5049]: E0127 17:16:35.485789 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift podName:0af4a67e-8714-4d41-ab32-7b2e526a0799 nodeName:}" failed. No retries permitted until 2026-01-27 17:16:43.485769327 +0000 UTC m=+1178.584742886 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift") pod "swift-storage-0" (UID: "0af4a67e-8714-4d41-ab32-7b2e526a0799") : configmap "swift-ring-files" not found Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.587394 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-combined-ca-bundle\") pod \"glance-db-sync-dhqbn\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " pod="openstack/glance-db-sync-dhqbn" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.587467 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-db-sync-config-data\") pod \"glance-db-sync-dhqbn\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " pod="openstack/glance-db-sync-dhqbn" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.587528 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvf4w\" (UniqueName: \"kubernetes.io/projected/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-kube-api-access-qvf4w\") pod \"glance-db-sync-dhqbn\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " pod="openstack/glance-db-sync-dhqbn" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.587567 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-config-data\") pod \"glance-db-sync-dhqbn\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " pod="openstack/glance-db-sync-dhqbn" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.599808 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-combined-ca-bundle\") pod \"glance-db-sync-dhqbn\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " pod="openstack/glance-db-sync-dhqbn" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.602052 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-config-data\") pod \"glance-db-sync-dhqbn\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " pod="openstack/glance-db-sync-dhqbn" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.603147 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-db-sync-config-data\") pod \"glance-db-sync-dhqbn\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " pod="openstack/glance-db-sync-dhqbn" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.618937 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-qvf4w\" (UniqueName: \"kubernetes.io/projected/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-kube-api-access-qvf4w\") pod \"glance-db-sync-dhqbn\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " pod="openstack/glance-db-sync-dhqbn" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.715871 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dhqbn" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.832383 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pdn59" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.896795 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6zbz\" (UniqueName: \"kubernetes.io/projected/94ebc5db-d0db-4209-b469-39ce600dfb97-kube-api-access-v6zbz\") pod \"94ebc5db-d0db-4209-b469-39ce600dfb97\" (UID: \"94ebc5db-d0db-4209-b469-39ce600dfb97\") " Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.896946 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ebc5db-d0db-4209-b469-39ce600dfb97-operator-scripts\") pod \"94ebc5db-d0db-4209-b469-39ce600dfb97\" (UID: \"94ebc5db-d0db-4209-b469-39ce600dfb97\") " Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.897625 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94ebc5db-d0db-4209-b469-39ce600dfb97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94ebc5db-d0db-4209-b469-39ce600dfb97" (UID: "94ebc5db-d0db-4209-b469-39ce600dfb97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.903895 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ebc5db-d0db-4209-b469-39ce600dfb97-kube-api-access-v6zbz" (OuterVolumeSpecName: "kube-api-access-v6zbz") pod "94ebc5db-d0db-4209-b469-39ce600dfb97" (UID: "94ebc5db-d0db-4209-b469-39ce600dfb97"). InnerVolumeSpecName "kube-api-access-v6zbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.998835 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6zbz\" (UniqueName: \"kubernetes.io/projected/94ebc5db-d0db-4209-b469-39ce600dfb97-kube-api-access-v6zbz\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:35 crc kubenswrapper[5049]: I0127 17:16:35.998867 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ebc5db-d0db-4209-b469-39ce600dfb97-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:36 crc kubenswrapper[5049]: I0127 17:16:36.309991 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dhqbn"] Jan 27 17:16:36 crc kubenswrapper[5049]: W0127 17:16:36.311167 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ffa9f84_a923_4ded_8dc7_a5b69acd6464.slice/crio-04dc73829ca16c0d856001527b7e86f1c18d0b5cdeaf9411efc995d8dfdea9df WatchSource:0}: Error finding container 04dc73829ca16c0d856001527b7e86f1c18d0b5cdeaf9411efc995d8dfdea9df: Status 404 returned error can't find the container with id 04dc73829ca16c0d856001527b7e86f1c18d0b5cdeaf9411efc995d8dfdea9df Jan 27 17:16:36 crc kubenswrapper[5049]: I0127 17:16:36.428469 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pdn59" Jan 27 17:16:36 crc kubenswrapper[5049]: I0127 17:16:36.428489 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pdn59" event={"ID":"94ebc5db-d0db-4209-b469-39ce600dfb97","Type":"ContainerDied","Data":"3a87f61d1199d91ad66a5980f7205a52abf4fa0dd38eb6ff5136bb1ecb0f302e"} Jan 27 17:16:36 crc kubenswrapper[5049]: I0127 17:16:36.428522 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a87f61d1199d91ad66a5980f7205a52abf4fa0dd38eb6ff5136bb1ecb0f302e" Jan 27 17:16:36 crc kubenswrapper[5049]: I0127 17:16:36.429821 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dhqbn" event={"ID":"9ffa9f84-a923-4ded-8dc7-a5b69acd6464","Type":"ContainerStarted","Data":"04dc73829ca16c0d856001527b7e86f1c18d0b5cdeaf9411efc995d8dfdea9df"} Jan 27 17:16:36 crc kubenswrapper[5049]: I0127 17:16:36.817543 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-6ncxg" Jan 27 17:16:36 crc kubenswrapper[5049]: I0127 17:16:36.890997 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7bjbp"] Jan 27 17:16:36 crc kubenswrapper[5049]: I0127 17:16:36.891288 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp" podUID="c2e801e9-4180-412c-83c2-c2871b506588" containerName="dnsmasq-dns" containerID="cri-o://a754ffe3df0b9112bf0fad75e1f26b0f4fe0656c2dfbea5dc67947d9a5bac52e" gracePeriod=10 Jan 27 17:16:38 crc kubenswrapper[5049]: I0127 17:16:38.340029 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pdn59"] Jan 27 17:16:38 crc kubenswrapper[5049]: I0127 17:16:38.347752 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pdn59"] Jan 27 17:16:38 crc kubenswrapper[5049]: I0127 17:16:38.727476 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp" 
podUID="c2e801e9-4180-412c-83c2-c2871b506588" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Jan 27 17:16:39 crc kubenswrapper[5049]: I0127 17:16:39.661977 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ebc5db-d0db-4209-b469-39ce600dfb97" path="/var/lib/kubelet/pods/94ebc5db-d0db-4209-b469-39ce600dfb97/volumes" Jan 27 17:16:40 crc kubenswrapper[5049]: I0127 17:16:40.083172 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.411975 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.477356 5049 generic.go:334] "Generic (PLEG): container finished" podID="c2e801e9-4180-412c-83c2-c2871b506588" containerID="a754ffe3df0b9112bf0fad75e1f26b0f4fe0656c2dfbea5dc67947d9a5bac52e" exitCode=0 Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.477451 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp" event={"ID":"c2e801e9-4180-412c-83c2-c2871b506588","Type":"ContainerDied","Data":"a754ffe3df0b9112bf0fad75e1f26b0f4fe0656c2dfbea5dc67947d9a5bac52e"} Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.477495 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp" event={"ID":"c2e801e9-4180-412c-83c2-c2871b506588","Type":"ContainerDied","Data":"4609e644de365c65f7f3d8901ead0893e53e63d7af2e27fac16c691e783236e4"} Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.477518 5049 scope.go:117] "RemoveContainer" containerID="a754ffe3df0b9112bf0fad75e1f26b0f4fe0656c2dfbea5dc67947d9a5bac52e" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.477521 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7bjbp" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.494309 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-ovsdbserver-sb\") pod \"c2e801e9-4180-412c-83c2-c2871b506588\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.494359 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-ovsdbserver-nb\") pod \"c2e801e9-4180-412c-83c2-c2871b506588\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.494417 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-dns-svc\") pod \"c2e801e9-4180-412c-83c2-c2871b506588\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.494453 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhkk4\" (UniqueName: \"kubernetes.io/projected/c2e801e9-4180-412c-83c2-c2871b506588-kube-api-access-dhkk4\") pod \"c2e801e9-4180-412c-83c2-c2871b506588\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.494580 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-config\") pod \"c2e801e9-4180-412c-83c2-c2871b506588\" (UID: \"c2e801e9-4180-412c-83c2-c2871b506588\") " Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.499356 5049 scope.go:117] "RemoveContainer" containerID="585d2a95a97ce61a2a63e2251f456e50e86e38b810875550a8339e8c847ba36a" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.499637 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2e801e9-4180-412c-83c2-c2871b506588-kube-api-access-dhkk4" (OuterVolumeSpecName: "kube-api-access-dhkk4") pod "c2e801e9-4180-412c-83c2-c2871b506588" (UID: "c2e801e9-4180-412c-83c2-c2871b506588"). InnerVolumeSpecName "kube-api-access-dhkk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.539002 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-config" (OuterVolumeSpecName: "config") pod "c2e801e9-4180-412c-83c2-c2871b506588" (UID: "c2e801e9-4180-412c-83c2-c2871b506588"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.539020 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c2e801e9-4180-412c-83c2-c2871b506588" (UID: "c2e801e9-4180-412c-83c2-c2871b506588"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.552550 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c2e801e9-4180-412c-83c2-c2871b506588" (UID: "c2e801e9-4180-412c-83c2-c2871b506588"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.553884 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2e801e9-4180-412c-83c2-c2871b506588" (UID: "c2e801e9-4180-412c-83c2-c2871b506588"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.596550 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.596578 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.596588 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.596991 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhkk4\" (UniqueName: \"kubernetes.io/projected/c2e801e9-4180-412c-83c2-c2871b506588-kube-api-access-dhkk4\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.597059 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e801e9-4180-412c-83c2-c2871b506588-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.634650 5049 scope.go:117] "RemoveContainer" containerID="a754ffe3df0b9112bf0fad75e1f26b0f4fe0656c2dfbea5dc67947d9a5bac52e" Jan 27 17:16:41 crc kubenswrapper[5049]: E0127 17:16:41.635117 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a754ffe3df0b9112bf0fad75e1f26b0f4fe0656c2dfbea5dc67947d9a5bac52e\": container with ID starting with a754ffe3df0b9112bf0fad75e1f26b0f4fe0656c2dfbea5dc67947d9a5bac52e not found: ID does not exist" containerID="a754ffe3df0b9112bf0fad75e1f26b0f4fe0656c2dfbea5dc67947d9a5bac52e" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.635187 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a754ffe3df0b9112bf0fad75e1f26b0f4fe0656c2dfbea5dc67947d9a5bac52e"} err="failed to get container status \"a754ffe3df0b9112bf0fad75e1f26b0f4fe0656c2dfbea5dc67947d9a5bac52e\": rpc error: code = NotFound desc = could not find container \"a754ffe3df0b9112bf0fad75e1f26b0f4fe0656c2dfbea5dc67947d9a5bac52e\": container with ID starting with a754ffe3df0b9112bf0fad75e1f26b0f4fe0656c2dfbea5dc67947d9a5bac52e not found: ID does not exist" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.635229 5049 
scope.go:117] "RemoveContainer" containerID="585d2a95a97ce61a2a63e2251f456e50e86e38b810875550a8339e8c847ba36a" Jan 27 17:16:41 crc kubenswrapper[5049]: E0127 17:16:41.635548 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"585d2a95a97ce61a2a63e2251f456e50e86e38b810875550a8339e8c847ba36a\": container with ID starting with 585d2a95a97ce61a2a63e2251f456e50e86e38b810875550a8339e8c847ba36a not found: ID does not exist" containerID="585d2a95a97ce61a2a63e2251f456e50e86e38b810875550a8339e8c847ba36a" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.635598 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"585d2a95a97ce61a2a63e2251f456e50e86e38b810875550a8339e8c847ba36a"} err="failed to get container status \"585d2a95a97ce61a2a63e2251f456e50e86e38b810875550a8339e8c847ba36a\": rpc error: code = NotFound desc = could not find container \"585d2a95a97ce61a2a63e2251f456e50e86e38b810875550a8339e8c847ba36a\": container with ID starting with 585d2a95a97ce61a2a63e2251f456e50e86e38b810875550a8339e8c847ba36a not found: ID does not exist" Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.802361 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7bjbp"] Jan 27 17:16:41 crc kubenswrapper[5049]: I0127 17:16:41.810191 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7bjbp"] Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.353141 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zr447"] Jan 27 17:16:43 crc kubenswrapper[5049]: E0127 17:16:43.353879 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e801e9-4180-412c-83c2-c2871b506588" containerName="init" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.353898 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e801e9-4180-412c-83c2-c2871b506588" containerName="init" Jan 27 17:16:43 crc kubenswrapper[5049]: E0127 17:16:43.353932 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e801e9-4180-412c-83c2-c2871b506588" containerName="dnsmasq-dns" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.353945 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e801e9-4180-412c-83c2-c2871b506588" containerName="dnsmasq-dns" Jan 27 17:16:43 crc kubenswrapper[5049]: E0127 17:16:43.353982 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ebc5db-d0db-4209-b469-39ce600dfb97" containerName="mariadb-account-create-update" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.353991 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ebc5db-d0db-4209-b469-39ce600dfb97" containerName="mariadb-account-create-update" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.354235 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e801e9-4180-412c-83c2-c2871b506588" containerName="dnsmasq-dns" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.354259 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="94ebc5db-d0db-4209-b469-39ce600dfb97" containerName="mariadb-account-create-update" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.355014 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zr447" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.357143 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.366724 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zr447"] Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.427623 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b09562c-f4c4-425a-a400-113e913a8031-operator-scripts\") pod \"root-account-create-update-zr447\" (UID: \"7b09562c-f4c4-425a-a400-113e913a8031\") " pod="openstack/root-account-create-update-zr447" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.427786 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4n5m\" (UniqueName: \"kubernetes.io/projected/7b09562c-f4c4-425a-a400-113e913a8031-kube-api-access-x4n5m\") pod \"root-account-create-update-zr447\" (UID: \"7b09562c-f4c4-425a-a400-113e913a8031\") " pod="openstack/root-account-create-update-zr447" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.528542 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b09562c-f4c4-425a-a400-113e913a8031-operator-scripts\") pod \"root-account-create-update-zr447\" (UID: \"7b09562c-f4c4-425a-a400-113e913a8031\") " pod="openstack/root-account-create-update-zr447" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.528623 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.528707 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4n5m\" (UniqueName: \"kubernetes.io/projected/7b09562c-f4c4-425a-a400-113e913a8031-kube-api-access-x4n5m\") pod \"root-account-create-update-zr447\" (UID: \"7b09562c-f4c4-425a-a400-113e913a8031\") " pod="openstack/root-account-create-update-zr447" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.529764 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b09562c-f4c4-425a-a400-113e913a8031-operator-scripts\") pod \"root-account-create-update-zr447\" (UID: \"7b09562c-f4c4-425a-a400-113e913a8031\") " pod="openstack/root-account-create-update-zr447" Jan 27 17:16:43 crc kubenswrapper[5049]: E0127 17:16:43.529869 5049 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 17:16:43 crc kubenswrapper[5049]: E0127 17:16:43.529887 5049 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 17:16:43 crc kubenswrapper[5049]: E0127 17:16:43.529927 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift podName:0af4a67e-8714-4d41-ab32-7b2e526a0799 nodeName:}" failed. 
No retries permitted until 2026-01-27 17:16:59.529913643 +0000 UTC m=+1194.628887192 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift") pod "swift-storage-0" (UID: "0af4a67e-8714-4d41-ab32-7b2e526a0799") : configmap "swift-ring-files" not found Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.571870 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4n5m\" (UniqueName: \"kubernetes.io/projected/7b09562c-f4c4-425a-a400-113e913a8031-kube-api-access-x4n5m\") pod \"root-account-create-update-zr447\" (UID: \"7b09562c-f4c4-425a-a400-113e913a8031\") " pod="openstack/root-account-create-update-zr447" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.657766 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2e801e9-4180-412c-83c2-c2871b506588" path="/var/lib/kubelet/pods/c2e801e9-4180-412c-83c2-c2871b506588/volumes" Jan 27 17:16:43 crc kubenswrapper[5049]: I0127 17:16:43.691399 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zr447" Jan 27 17:16:44 crc kubenswrapper[5049]: I0127 17:16:44.162647 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zr447"] Jan 27 17:16:44 crc kubenswrapper[5049]: I0127 17:16:44.506167 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zr447" event={"ID":"7b09562c-f4c4-425a-a400-113e913a8031","Type":"ContainerStarted","Data":"1b6f6c5a2dbd10576d39cf66ace86b732d5a6d3d4205d639f93906059ad5f4f4"} Jan 27 17:16:44 crc kubenswrapper[5049]: I0127 17:16:44.506535 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zr447" event={"ID":"7b09562c-f4c4-425a-a400-113e913a8031","Type":"ContainerStarted","Data":"02ab73bd66c63371755f918664c31562b4708866193297b9852f935d823eee65"} Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.202030 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pv2qx" podUID="389cf061-3e03-4e54-bf97-c88a747fd18b" containerName="ovn-controller" probeResult="failure" output=< Jan 27 17:16:45 crc kubenswrapper[5049]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 17:16:45 crc kubenswrapper[5049]: > Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.208857 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.209898 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.444007 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pv2qx-config-bbr8j"] Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.445943 5049 util.go:30] "No sandbox for pod can be found. 
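The MountVolume.SetUp failure above is expected ordering, not breakage: `etc-swift` on swift-storage-0 is a projected volume sourcing the `swift-ring-files` ConfigMap, which the swift-ring-rebalance-2bn6v job has not yet published, so the kubelet parks the operation (no retries for 16s) instead of failing the pod. The same mount succeeds at 17:16:59 further down, once the ConfigMap exists. A sketch of checking that precondition from outside the node with client-go; the kubeconfig path is an assumption, the namespace and name match the log:

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	_, err = client.CoreV1().ConfigMaps("openstack").
		Get(context.Background(), "swift-ring-files", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		// Same condition the kubelet hit: the ring-rebalance job has not
		// published the rings yet, so the projected volume cannot be built.
		fmt.Println("swift-ring-files not found; etc-swift will keep backing off")
	case err != nil:
		panic(err)
	default:
		fmt.Println("swift-ring-files exists; etc-swift SetUp should succeed on retry")
	}
}
```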
Need to start a new one" pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.447908 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.452236 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pv2qx-config-bbr8j"] Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.519291 5049 generic.go:334] "Generic (PLEG): container finished" podID="2c03cd98-3721-4e1d-9a3c-5f0547f067ff" containerID="7ff2b20680587ac503ca41e9a990c3073fbbedfa45194f1c414afb82e3c9c863" exitCode=0 Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.519354 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2bn6v" event={"ID":"2c03cd98-3721-4e1d-9a3c-5f0547f067ff","Type":"ContainerDied","Data":"7ff2b20680587ac503ca41e9a990c3073fbbedfa45194f1c414afb82e3c9c863"} Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.523463 5049 generic.go:334] "Generic (PLEG): container finished" podID="dbb24b4b-dfbd-431f-8244-098c40f7c24f" containerID="a29986ca75cb1699fa9d7fe36bd5312307aec498664cb9341a2e3a9d0ea59e2b" exitCode=0 Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.523556 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbb24b4b-dfbd-431f-8244-098c40f7c24f","Type":"ContainerDied","Data":"a29986ca75cb1699fa9d7fe36bd5312307aec498664cb9341a2e3a9d0ea59e2b"} Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.526929 5049 generic.go:334] "Generic (PLEG): container finished" podID="62ffcfe9-3e93-48ee-8d03-9b653d1bfede" containerID="a2cc96849a7585da55c3fcd0c2a3f9b893b62b13c4ac2b87e3206b04bb909283" exitCode=0 Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.527020 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"62ffcfe9-3e93-48ee-8d03-9b653d1bfede","Type":"ContainerDied","Data":"a2cc96849a7585da55c3fcd0c2a3f9b893b62b13c4ac2b87e3206b04bb909283"} Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.528247 5049 generic.go:334] "Generic (PLEG): container finished" podID="7b09562c-f4c4-425a-a400-113e913a8031" containerID="1b6f6c5a2dbd10576d39cf66ace86b732d5a6d3d4205d639f93906059ad5f4f4" exitCode=0 Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.528341 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zr447" event={"ID":"7b09562c-f4c4-425a-a400-113e913a8031","Type":"ContainerDied","Data":"1b6f6c5a2dbd10576d39cf66ace86b732d5a6d3d4205d639f93906059ad5f4f4"} Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.564324 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8d7a3f1-63b2-41d0-8024-c1634cde5870-scripts\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.564376 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-log-ovn\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 
17:16:45.564504 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d7a3f1-63b2-41d0-8024-c1634cde5870-additional-scripts\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.564558 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-run\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.564722 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwjjb\" (UniqueName: \"kubernetes.io/projected/b8d7a3f1-63b2-41d0-8024-c1634cde5870-kube-api-access-wwjjb\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.564918 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-run-ovn\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.667137 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8d7a3f1-63b2-41d0-8024-c1634cde5870-scripts\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.667206 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-log-ovn\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.667242 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d7a3f1-63b2-41d0-8024-c1634cde5870-additional-scripts\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.667271 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-run\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.667313 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwjjb\" (UniqueName: \"kubernetes.io/projected/b8d7a3f1-63b2-41d0-8024-c1634cde5870-kube-api-access-wwjjb\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: 
\"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.667381 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-run-ovn\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.667551 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-log-ovn\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.667606 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-run-ovn\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.667627 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-run\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.668314 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d7a3f1-63b2-41d0-8024-c1634cde5870-additional-scripts\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.669987 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8d7a3f1-63b2-41d0-8024-c1634cde5870-scripts\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.705262 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwjjb\" (UniqueName: \"kubernetes.io/projected/b8d7a3f1-63b2-41d0-8024-c1634cde5870-kube-api-access-wwjjb\") pod \"ovn-controller-pv2qx-config-bbr8j\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:45 crc kubenswrapper[5049]: I0127 17:16:45.768655 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:50 crc kubenswrapper[5049]: I0127 17:16:50.130755 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pv2qx" podUID="389cf061-3e03-4e54-bf97-c88a747fd18b" containerName="ovn-controller" probeResult="failure" output=< Jan 27 17:16:50 crc kubenswrapper[5049]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 17:16:50 crc kubenswrapper[5049]: > Jan 27 17:16:53 crc kubenswrapper[5049]: I0127 17:16:53.901875 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zr447" Jan 27 17:16:53 crc kubenswrapper[5049]: I0127 17:16:53.924134 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2bn6v" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.013120 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-ring-data-devices\") pod \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.013195 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-combined-ca-bundle\") pod \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.013233 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-dispersionconf\") pod \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.013302 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-swiftconf\") pod \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.013349 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-etc-swift\") pod \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.013382 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-scripts\") pod \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.013409 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk2d8\" (UniqueName: \"kubernetes.io/projected/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-kube-api-access-pk2d8\") pod \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\" (UID: \"2c03cd98-3721-4e1d-9a3c-5f0547f067ff\") " Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.013502 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4n5m\" (UniqueName: 
\"kubernetes.io/projected/7b09562c-f4c4-425a-a400-113e913a8031-kube-api-access-x4n5m\") pod \"7b09562c-f4c4-425a-a400-113e913a8031\" (UID: \"7b09562c-f4c4-425a-a400-113e913a8031\") " Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.013532 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b09562c-f4c4-425a-a400-113e913a8031-operator-scripts\") pod \"7b09562c-f4c4-425a-a400-113e913a8031\" (UID: \"7b09562c-f4c4-425a-a400-113e913a8031\") " Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.014314 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2c03cd98-3721-4e1d-9a3c-5f0547f067ff" (UID: "2c03cd98-3721-4e1d-9a3c-5f0547f067ff"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.014430 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b09562c-f4c4-425a-a400-113e913a8031-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b09562c-f4c4-425a-a400-113e913a8031" (UID: "7b09562c-f4c4-425a-a400-113e913a8031"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.015643 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2c03cd98-3721-4e1d-9a3c-5f0547f067ff" (UID: "2c03cd98-3721-4e1d-9a3c-5f0547f067ff"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.036637 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b09562c-f4c4-425a-a400-113e913a8031-kube-api-access-x4n5m" (OuterVolumeSpecName: "kube-api-access-x4n5m") pod "7b09562c-f4c4-425a-a400-113e913a8031" (UID: "7b09562c-f4c4-425a-a400-113e913a8031"). InnerVolumeSpecName "kube-api-access-x4n5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.036813 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2c03cd98-3721-4e1d-9a3c-5f0547f067ff" (UID: "2c03cd98-3721-4e1d-9a3c-5f0547f067ff"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.039474 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2c03cd98-3721-4e1d-9a3c-5f0547f067ff" (UID: "2c03cd98-3721-4e1d-9a3c-5f0547f067ff"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.039980 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-scripts" (OuterVolumeSpecName: "scripts") pod "2c03cd98-3721-4e1d-9a3c-5f0547f067ff" (UID: "2c03cd98-3721-4e1d-9a3c-5f0547f067ff"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.042423 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c03cd98-3721-4e1d-9a3c-5f0547f067ff" (UID: "2c03cd98-3721-4e1d-9a3c-5f0547f067ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.043198 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-kube-api-access-pk2d8" (OuterVolumeSpecName: "kube-api-access-pk2d8") pod "2c03cd98-3721-4e1d-9a3c-5f0547f067ff" (UID: "2c03cd98-3721-4e1d-9a3c-5f0547f067ff"). InnerVolumeSpecName "kube-api-access-pk2d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.115158 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4n5m\" (UniqueName: \"kubernetes.io/projected/7b09562c-f4c4-425a-a400-113e913a8031-kube-api-access-x4n5m\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.115198 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b09562c-f4c4-425a-a400-113e913a8031-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.115208 5049 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.115216 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.115225 5049 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.115234 5049 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.115242 5049 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.115250 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.115258 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk2d8\" (UniqueName: \"kubernetes.io/projected/2c03cd98-3721-4e1d-9a3c-5f0547f067ff-kube-api-access-pk2d8\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.179991 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pv2qx-config-bbr8j"] Jan 
27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.598646 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbb24b4b-dfbd-431f-8244-098c40f7c24f","Type":"ContainerStarted","Data":"3058eb3d32e2416d54cad80e06c08b015a6883dba23fa9f79957453d1cd58462"} Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.599707 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.601168 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dhqbn" event={"ID":"9ffa9f84-a923-4ded-8dc7-a5b69acd6464","Type":"ContainerStarted","Data":"4e0471d1cfa916ece5f9eda6fe5b911bf3e0ffdf03d02351818d446d70fa2cf5"} Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.603361 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"62ffcfe9-3e93-48ee-8d03-9b653d1bfede","Type":"ContainerStarted","Data":"4e671c8bd986d52bd1e5185e5289863dc7f99ba1cde15ecbdb767105fbf5621c"} Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.603850 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.605070 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zr447" event={"ID":"7b09562c-f4c4-425a-a400-113e913a8031","Type":"ContainerDied","Data":"02ab73bd66c63371755f918664c31562b4708866193297b9852f935d823eee65"} Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.605093 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02ab73bd66c63371755f918664c31562b4708866193297b9852f935d823eee65" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.605129 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zr447" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.614856 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2bn6v" event={"ID":"2c03cd98-3721-4e1d-9a3c-5f0547f067ff","Type":"ContainerDied","Data":"92e5c4ce46de5ac29d7137cbb8f086f72e1f13559dd4eb8c478556172acf4466"} Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.614885 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92e5c4ce46de5ac29d7137cbb8f086f72e1f13559dd4eb8c478556172acf4466" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.614925 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2bn6v" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.617353 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pv2qx-config-bbr8j" event={"ID":"b8d7a3f1-63b2-41d0-8024-c1634cde5870","Type":"ContainerStarted","Data":"08aeaa3adcbe3d328a7df12c2e74a48b26c91b57eb23db104799654eb3b9e3e4"} Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.617389 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pv2qx-config-bbr8j" event={"ID":"b8d7a3f1-63b2-41d0-8024-c1634cde5870","Type":"ContainerStarted","Data":"48355d3b29e40ad6110ca204529c22376cc5cb12f718806e9fc594d928239400"} Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.625739 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=57.581484387 podStartE2EDuration="1m4.625719836s" podCreationTimestamp="2026-01-27 17:15:50 +0000 UTC" firstStartedPulling="2026-01-27 17:16:03.594911684 +0000 UTC m=+1138.693885233" lastFinishedPulling="2026-01-27 17:16:10.639147133 +0000 UTC m=+1145.738120682" observedRunningTime="2026-01-27 17:16:54.622377249 +0000 UTC m=+1189.721350808" watchObservedRunningTime="2026-01-27 17:16:54.625719836 +0000 UTC m=+1189.724693395" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.670534 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-pv2qx-config-bbr8j" podStartSLOduration=9.670515768 podStartE2EDuration="9.670515768s" podCreationTimestamp="2026-01-27 17:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:16:54.668986744 +0000 UTC m=+1189.767960293" watchObservedRunningTime="2026-01-27 17:16:54.670515768 +0000 UTC m=+1189.769489317" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.675641 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=57.114269562 podStartE2EDuration="1m4.675625585s" podCreationTimestamp="2026-01-27 17:15:50 +0000 UTC" firstStartedPulling="2026-01-27 17:16:03.55420816 +0000 UTC m=+1138.653181709" lastFinishedPulling="2026-01-27 17:16:11.115564183 +0000 UTC m=+1146.214537732" observedRunningTime="2026-01-27 17:16:54.656344379 +0000 UTC m=+1189.755317928" watchObservedRunningTime="2026-01-27 17:16:54.675625585 +0000 UTC m=+1189.774599124" Jan 27 17:16:54 crc kubenswrapper[5049]: I0127 17:16:54.686367 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-dhqbn" podStartSLOduration=2.226426448 podStartE2EDuration="19.686353135s" podCreationTimestamp="2026-01-27 17:16:35 +0000 UTC" firstStartedPulling="2026-01-27 17:16:36.313327784 +0000 UTC m=+1171.412301333" lastFinishedPulling="2026-01-27 17:16:53.773254461 +0000 UTC m=+1188.872228020" observedRunningTime="2026-01-27 17:16:54.682746211 +0000 UTC m=+1189.781719780" watchObservedRunningTime="2026-01-27 17:16:54.686353135 +0000 UTC m=+1189.785326684" Jan 27 17:16:55 crc kubenswrapper[5049]: I0127 17:16:55.283690 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-pv2qx" Jan 27 17:16:55 crc kubenswrapper[5049]: I0127 17:16:55.625935 5049 generic.go:334] "Generic (PLEG): container finished" podID="b8d7a3f1-63b2-41d0-8024-c1634cde5870" 
containerID="08aeaa3adcbe3d328a7df12c2e74a48b26c91b57eb23db104799654eb3b9e3e4" exitCode=0 Jan 27 17:16:55 crc kubenswrapper[5049]: I0127 17:16:55.626084 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pv2qx-config-bbr8j" event={"ID":"b8d7a3f1-63b2-41d0-8024-c1634cde5870","Type":"ContainerDied","Data":"08aeaa3adcbe3d328a7df12c2e74a48b26c91b57eb23db104799654eb3b9e3e4"} Jan 27 17:16:56 crc kubenswrapper[5049]: I0127 17:16:56.959343 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.074225 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwjjb\" (UniqueName: \"kubernetes.io/projected/b8d7a3f1-63b2-41d0-8024-c1634cde5870-kube-api-access-wwjjb\") pod \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.074309 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-log-ovn\") pod \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.074368 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d7a3f1-63b2-41d0-8024-c1634cde5870-additional-scripts\") pod \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.074461 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-run-ovn\") pod \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.074470 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b8d7a3f1-63b2-41d0-8024-c1634cde5870" (UID: "b8d7a3f1-63b2-41d0-8024-c1634cde5870"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.074516 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8d7a3f1-63b2-41d0-8024-c1634cde5870-scripts\") pod \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.074532 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b8d7a3f1-63b2-41d0-8024-c1634cde5870" (UID: "b8d7a3f1-63b2-41d0-8024-c1634cde5870"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.074561 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-run\") pod \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\" (UID: \"b8d7a3f1-63b2-41d0-8024-c1634cde5870\") " Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.074846 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-run" (OuterVolumeSpecName: "var-run") pod "b8d7a3f1-63b2-41d0-8024-c1634cde5870" (UID: "b8d7a3f1-63b2-41d0-8024-c1634cde5870"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.075292 5049 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.075335 5049 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.075356 5049 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b8d7a3f1-63b2-41d0-8024-c1634cde5870-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.075342 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8d7a3f1-63b2-41d0-8024-c1634cde5870-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b8d7a3f1-63b2-41d0-8024-c1634cde5870" (UID: "b8d7a3f1-63b2-41d0-8024-c1634cde5870"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.075905 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8d7a3f1-63b2-41d0-8024-c1634cde5870-scripts" (OuterVolumeSpecName: "scripts") pod "b8d7a3f1-63b2-41d0-8024-c1634cde5870" (UID: "b8d7a3f1-63b2-41d0-8024-c1634cde5870"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.083402 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d7a3f1-63b2-41d0-8024-c1634cde5870-kube-api-access-wwjjb" (OuterVolumeSpecName: "kube-api-access-wwjjb") pod "b8d7a3f1-63b2-41d0-8024-c1634cde5870" (UID: "b8d7a3f1-63b2-41d0-8024-c1634cde5870"). InnerVolumeSpecName "kube-api-access-wwjjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.177780 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwjjb\" (UniqueName: \"kubernetes.io/projected/b8d7a3f1-63b2-41d0-8024-c1634cde5870-kube-api-access-wwjjb\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.177818 5049 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b8d7a3f1-63b2-41d0-8024-c1634cde5870-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.177832 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b8d7a3f1-63b2-41d0-8024-c1634cde5870-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.284732 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pv2qx-config-bbr8j"] Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.293173 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pv2qx-config-bbr8j"] Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.640189 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48355d3b29e40ad6110ca204529c22376cc5cb12f718806e9fc594d928239400" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.640292 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pv2qx-config-bbr8j" Jan 27 17:16:57 crc kubenswrapper[5049]: I0127 17:16:57.654194 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8d7a3f1-63b2-41d0-8024-c1634cde5870" path="/var/lib/kubelet/pods/b8d7a3f1-63b2-41d0-8024-c1634cde5870/volumes" Jan 27 17:16:59 crc kubenswrapper[5049]: I0127 17:16:59.614864 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0" Jan 27 17:16:59 crc kubenswrapper[5049]: I0127 17:16:59.631840 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift\") pod \"swift-storage-0\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") " pod="openstack/swift-storage-0" Jan 27 17:16:59 crc kubenswrapper[5049]: I0127 17:16:59.760595 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 17:17:00 crc kubenswrapper[5049]: I0127 17:17:00.330572 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 17:17:00 crc kubenswrapper[5049]: I0127 17:17:00.664993 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"ad82617d8aed3b32808c93afe54d1c8ea6d727e32e61acf0b8b6c755610cdf61"} Jan 27 17:17:01 crc kubenswrapper[5049]: I0127 17:17:01.673977 5049 generic.go:334] "Generic (PLEG): container finished" podID="9ffa9f84-a923-4ded-8dc7-a5b69acd6464" containerID="4e0471d1cfa916ece5f9eda6fe5b911bf3e0ffdf03d02351818d446d70fa2cf5" exitCode=0 Jan 27 17:17:01 crc kubenswrapper[5049]: I0127 17:17:01.674041 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dhqbn" event={"ID":"9ffa9f84-a923-4ded-8dc7-a5b69acd6464","Type":"ContainerDied","Data":"4e0471d1cfa916ece5f9eda6fe5b911bf3e0ffdf03d02351818d446d70fa2cf5"} Jan 27 17:17:02 crc kubenswrapper[5049]: I0127 17:17:02.685762 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"c54193743efddc35cc5d308539a2b6ddf4f91a56153042a087edab8b4178d076"} Jan 27 17:17:02 crc kubenswrapper[5049]: I0127 17:17:02.686112 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"224833d0edff6b380a8a1b7d43dabc8125423b2aafe62311878605dff679aa61"} Jan 27 17:17:02 crc kubenswrapper[5049]: I0127 17:17:02.686135 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"03c47c98df171b7b9208a6248b44605bd8cbfeced5546e25c5829b8a2c6bc049"} Jan 27 17:17:02 crc kubenswrapper[5049]: I0127 17:17:02.686153 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"a2dfb2a3d54aba3ae55d7279c088ecb8bf8242fef6f3a7d78a6cab7c08d00f25"} Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.175063 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dhqbn" Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.273059 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvf4w\" (UniqueName: \"kubernetes.io/projected/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-kube-api-access-qvf4w\") pod \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.273201 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-db-sync-config-data\") pod \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.273222 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-config-data\") pod \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.273325 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-combined-ca-bundle\") pod \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\" (UID: \"9ffa9f84-a923-4ded-8dc7-a5b69acd6464\") " Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.279217 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-kube-api-access-qvf4w" (OuterVolumeSpecName: "kube-api-access-qvf4w") pod "9ffa9f84-a923-4ded-8dc7-a5b69acd6464" (UID: "9ffa9f84-a923-4ded-8dc7-a5b69acd6464"). InnerVolumeSpecName "kube-api-access-qvf4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.280962 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9ffa9f84-a923-4ded-8dc7-a5b69acd6464" (UID: "9ffa9f84-a923-4ded-8dc7-a5b69acd6464"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.306778 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ffa9f84-a923-4ded-8dc7-a5b69acd6464" (UID: "9ffa9f84-a923-4ded-8dc7-a5b69acd6464"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.318993 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-config-data" (OuterVolumeSpecName: "config-data") pod "9ffa9f84-a923-4ded-8dc7-a5b69acd6464" (UID: "9ffa9f84-a923-4ded-8dc7-a5b69acd6464"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.374623 5049 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.374656 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.374665 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.374703 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvf4w\" (UniqueName: \"kubernetes.io/projected/9ffa9f84-a923-4ded-8dc7-a5b69acd6464-kube-api-access-qvf4w\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.694371 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dhqbn" event={"ID":"9ffa9f84-a923-4ded-8dc7-a5b69acd6464","Type":"ContainerDied","Data":"04dc73829ca16c0d856001527b7e86f1c18d0b5cdeaf9411efc995d8dfdea9df"} Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.694422 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04dc73829ca16c0d856001527b7e86f1c18d0b5cdeaf9411efc995d8dfdea9df" Jan 27 17:17:03 crc kubenswrapper[5049]: I0127 17:17:03.694433 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dhqbn" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.053441 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-9x9fw"] Jan 27 17:17:04 crc kubenswrapper[5049]: E0127 17:17:04.054496 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c03cd98-3721-4e1d-9a3c-5f0547f067ff" containerName="swift-ring-rebalance" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.054525 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c03cd98-3721-4e1d-9a3c-5f0547f067ff" containerName="swift-ring-rebalance" Jan 27 17:17:04 crc kubenswrapper[5049]: E0127 17:17:04.054551 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b09562c-f4c4-425a-a400-113e913a8031" containerName="mariadb-account-create-update" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.054559 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b09562c-f4c4-425a-a400-113e913a8031" containerName="mariadb-account-create-update" Jan 27 17:17:04 crc kubenswrapper[5049]: E0127 17:17:04.054576 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ffa9f84-a923-4ded-8dc7-a5b69acd6464" containerName="glance-db-sync" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.054584 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ffa9f84-a923-4ded-8dc7-a5b69acd6464" containerName="glance-db-sync" Jan 27 17:17:04 crc kubenswrapper[5049]: E0127 17:17:04.054602 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d7a3f1-63b2-41d0-8024-c1634cde5870" containerName="ovn-config" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.054610 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d7a3f1-63b2-41d0-8024-c1634cde5870" containerName="ovn-config" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.054844 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8d7a3f1-63b2-41d0-8024-c1634cde5870" containerName="ovn-config" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.054860 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b09562c-f4c4-425a-a400-113e913a8031" containerName="mariadb-account-create-update" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.054870 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ffa9f84-a923-4ded-8dc7-a5b69acd6464" containerName="glance-db-sync" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.054890 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c03cd98-3721-4e1d-9a3c-5f0547f067ff" containerName="swift-ring-rebalance" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.055805 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.085727 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k7jm\" (UniqueName: \"kubernetes.io/projected/e55ea905-97a1-4d37-82f6-6c0b44dde090-kube-api-access-5k7jm\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.085828 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-config\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.085878 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.085920 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.085938 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.087627 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-9x9fw"] Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.187885 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-config\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.187973 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.188033 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.188056 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.188103 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k7jm\" (UniqueName: \"kubernetes.io/projected/e55ea905-97a1-4d37-82f6-6c0b44dde090-kube-api-access-5k7jm\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.189413 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-config\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.189798 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.190361 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.190622 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.204475 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k7jm\" (UniqueName: \"kubernetes.io/projected/e55ea905-97a1-4d37-82f6-6c0b44dde090-kube-api-access-5k7jm\") pod \"dnsmasq-dns-5b946c75cc-9x9fw\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.443074 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.713920 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"3b0b393e0e4d1401963714ab9252d9671cfb2791dc7e153bd5d4476e4584159a"} Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.714221 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"a5245f69092058d5c8b04c536b6c645c68af7ffcd316ef3d92d2eec0e910b537"} Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.714234 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"a7718a70cb7ace0faec55cd3c7efc512f21a009631dd55ff9c6521f3669078d3"} Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.714245 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"22d0b0ce240d4ffa1f356f14354013ccc5a82232f0a1c7cab9e0c778a57cf470"} Jan 27 17:17:04 crc kubenswrapper[5049]: I0127 17:17:04.914237 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-9x9fw"] Jan 27 17:17:04 crc kubenswrapper[5049]: W0127 17:17:04.920578 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode55ea905_97a1_4d37_82f6_6c0b44dde090.slice/crio-09a6cf0bdc45374ef797586a9062a7a510a80e55ef913a900256cf1516b8dd5c WatchSource:0}: Error finding container 09a6cf0bdc45374ef797586a9062a7a510a80e55ef913a900256cf1516b8dd5c: Status 404 returned error can't find the container with id 09a6cf0bdc45374ef797586a9062a7a510a80e55ef913a900256cf1516b8dd5c Jan 27 17:17:05 crc kubenswrapper[5049]: I0127 17:17:05.740309 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"eb157c80aa43da636b4e1b90406a76023c1144590efb8be2dbcad94062ef1055"} Jan 27 17:17:05 crc kubenswrapper[5049]: I0127 17:17:05.742434 5049 generic.go:334] "Generic (PLEG): container finished" podID="e55ea905-97a1-4d37-82f6-6c0b44dde090" containerID="33ad41194e59644c75bccd081038cd5f0ce9d0aa4af66744cb0384bf233f2735" exitCode=0 Jan 27 17:17:05 crc kubenswrapper[5049]: I0127 17:17:05.742467 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" event={"ID":"e55ea905-97a1-4d37-82f6-6c0b44dde090","Type":"ContainerDied","Data":"33ad41194e59644c75bccd081038cd5f0ce9d0aa4af66744cb0384bf233f2735"} Jan 27 17:17:05 crc kubenswrapper[5049]: I0127 17:17:05.742515 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" event={"ID":"e55ea905-97a1-4d37-82f6-6c0b44dde090","Type":"ContainerStarted","Data":"09a6cf0bdc45374ef797586a9062a7a510a80e55ef913a900256cf1516b8dd5c"} Jan 27 17:17:06 crc kubenswrapper[5049]: I0127 17:17:06.752560 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" event={"ID":"e55ea905-97a1-4d37-82f6-6c0b44dde090","Type":"ContainerStarted","Data":"11d6ab08fb6419794c4fc7c9273e1e544e723c9ee6ccbf83f9ade3747a09e9e1"} Jan 27 17:17:06 crc kubenswrapper[5049]: I0127 17:17:06.752885 5049 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:06 crc kubenswrapper[5049]: I0127 17:17:06.759486 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"1822fafc255330bb427411a6c035e5846b79b8c89c234fa42ba8b370aa1361a1"} Jan 27 17:17:06 crc kubenswrapper[5049]: I0127 17:17:06.759531 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"6cb007db423c0f0689563185212e4dbebe9824bceac2708df056fd0a50a20fec"} Jan 27 17:17:06 crc kubenswrapper[5049]: I0127 17:17:06.759541 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"d642302c59a400ff1d5fcfc04bd4f3e11605424d17a253b3bbb525c32f0483b0"} Jan 27 17:17:07 crc kubenswrapper[5049]: I0127 17:17:07.775403 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"e31de55fb4f8fa7f46b4db48cfa14b2dcd4abbdcb8b59e5f1b5095831edf1d4b"} Jan 27 17:17:07 crc kubenswrapper[5049]: I0127 17:17:07.775734 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"3a93d9cd74365dc4b079066ac2c67767791d85d61773322bf02b5a01b937828e"} Jan 27 17:17:07 crc kubenswrapper[5049]: I0127 17:17:07.775746 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerStarted","Data":"49580b0e03bfc33665c28a177a2a91fdc57ef1f9597020b693192a0906c2b084"} Jan 27 17:17:07 crc kubenswrapper[5049]: I0127 17:17:07.813839 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" podStartSLOduration=3.813819329 podStartE2EDuration="3.813819329s" podCreationTimestamp="2026-01-27 17:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:17:06.800700811 +0000 UTC m=+1201.899674360" watchObservedRunningTime="2026-01-27 17:17:07.813819329 +0000 UTC m=+1202.912792878" Jan 27 17:17:07 crc kubenswrapper[5049]: I0127 17:17:07.816210 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.735714761 podStartE2EDuration="41.816199787s" podCreationTimestamp="2026-01-27 17:16:26 +0000 UTC" firstStartedPulling="2026-01-27 17:17:00.335362097 +0000 UTC m=+1195.434335646" lastFinishedPulling="2026-01-27 17:17:05.415847123 +0000 UTC m=+1200.514820672" observedRunningTime="2026-01-27 17:17:07.812058898 +0000 UTC m=+1202.911032457" watchObservedRunningTime="2026-01-27 17:17:07.816199787 +0000 UTC m=+1202.915173326" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.089277 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-9x9fw"] Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.139561 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wrndw"] Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.149861 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.151085 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wrndw"] Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.153546 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.267720 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.267784 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.267812 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.267832 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmtcz\" (UniqueName: \"kubernetes.io/projected/90cf84a1-03e1-46a5-96a1-8cddf4312669-kube-api-access-kmtcz\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.268207 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.268260 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-config\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.369392 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.369437 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmtcz\" (UniqueName: \"kubernetes.io/projected/90cf84a1-03e1-46a5-96a1-8cddf4312669-kube-api-access-kmtcz\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: 
\"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.369544 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.369572 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-config\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.369606 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.369647 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.370379 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.370386 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.370486 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.370561 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-config\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.371750 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: 
I0127 17:17:08.398005 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmtcz\" (UniqueName: \"kubernetes.io/projected/90cf84a1-03e1-46a5-96a1-8cddf4312669-kube-api-access-kmtcz\") pod \"dnsmasq-dns-74f6bcbc87-wrndw\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.476700 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.784013 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" podUID="e55ea905-97a1-4d37-82f6-6c0b44dde090" containerName="dnsmasq-dns" containerID="cri-o://11d6ab08fb6419794c4fc7c9273e1e544e723c9ee6ccbf83f9ade3747a09e9e1" gracePeriod=10 Jan 27 17:17:08 crc kubenswrapper[5049]: I0127 17:17:08.930784 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wrndw"] Jan 27 17:17:08 crc kubenswrapper[5049]: W0127 17:17:08.939035 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90cf84a1_03e1_46a5_96a1_8cddf4312669.slice/crio-cdf132c74a722dedf4d49bf7eded8266fb980bef35fd6162ed61624642291f51 WatchSource:0}: Error finding container cdf132c74a722dedf4d49bf7eded8266fb980bef35fd6162ed61624642291f51: Status 404 returned error can't find the container with id cdf132c74a722dedf4d49bf7eded8266fb980bef35fd6162ed61624642291f51 Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.274505 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.385333 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-ovsdbserver-sb\") pod \"e55ea905-97a1-4d37-82f6-6c0b44dde090\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.385498 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k7jm\" (UniqueName: \"kubernetes.io/projected/e55ea905-97a1-4d37-82f6-6c0b44dde090-kube-api-access-5k7jm\") pod \"e55ea905-97a1-4d37-82f6-6c0b44dde090\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.385554 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-config\") pod \"e55ea905-97a1-4d37-82f6-6c0b44dde090\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.385579 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-ovsdbserver-nb\") pod \"e55ea905-97a1-4d37-82f6-6c0b44dde090\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.385794 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-dns-svc\") pod \"e55ea905-97a1-4d37-82f6-6c0b44dde090\" (UID: \"e55ea905-97a1-4d37-82f6-6c0b44dde090\") " Jan 27 
17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.391443 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e55ea905-97a1-4d37-82f6-6c0b44dde090-kube-api-access-5k7jm" (OuterVolumeSpecName: "kube-api-access-5k7jm") pod "e55ea905-97a1-4d37-82f6-6c0b44dde090" (UID: "e55ea905-97a1-4d37-82f6-6c0b44dde090"). InnerVolumeSpecName "kube-api-access-5k7jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.424962 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e55ea905-97a1-4d37-82f6-6c0b44dde090" (UID: "e55ea905-97a1-4d37-82f6-6c0b44dde090"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.425488 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e55ea905-97a1-4d37-82f6-6c0b44dde090" (UID: "e55ea905-97a1-4d37-82f6-6c0b44dde090"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.426083 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e55ea905-97a1-4d37-82f6-6c0b44dde090" (UID: "e55ea905-97a1-4d37-82f6-6c0b44dde090"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.428902 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-config" (OuterVolumeSpecName: "config") pod "e55ea905-97a1-4d37-82f6-6c0b44dde090" (UID: "e55ea905-97a1-4d37-82f6-6c0b44dde090"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.488180 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.488213 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k7jm\" (UniqueName: \"kubernetes.io/projected/e55ea905-97a1-4d37-82f6-6c0b44dde090-kube-api-access-5k7jm\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.488224 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.488233 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.488241 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e55ea905-97a1-4d37-82f6-6c0b44dde090-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.798973 5049 generic.go:334] "Generic (PLEG): container finished" podID="e55ea905-97a1-4d37-82f6-6c0b44dde090" containerID="11d6ab08fb6419794c4fc7c9273e1e544e723c9ee6ccbf83f9ade3747a09e9e1" exitCode=0 Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.799037 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" event={"ID":"e55ea905-97a1-4d37-82f6-6c0b44dde090","Type":"ContainerDied","Data":"11d6ab08fb6419794c4fc7c9273e1e544e723c9ee6ccbf83f9ade3747a09e9e1"} Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.799073 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" event={"ID":"e55ea905-97a1-4d37-82f6-6c0b44dde090","Type":"ContainerDied","Data":"09a6cf0bdc45374ef797586a9062a7a510a80e55ef913a900256cf1516b8dd5c"} Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.799093 5049 scope.go:117] "RemoveContainer" containerID="11d6ab08fb6419794c4fc7c9273e1e544e723c9ee6ccbf83f9ade3747a09e9e1" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.799124 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-9x9fw" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.803277 5049 generic.go:334] "Generic (PLEG): container finished" podID="90cf84a1-03e1-46a5-96a1-8cddf4312669" containerID="a09c8be9e60e8da8466e088d675944d29a3b1e004e1b8ce8a001af4f420ad9cc" exitCode=0 Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.803319 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" event={"ID":"90cf84a1-03e1-46a5-96a1-8cddf4312669","Type":"ContainerDied","Data":"a09c8be9e60e8da8466e088d675944d29a3b1e004e1b8ce8a001af4f420ad9cc"} Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.803344 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" event={"ID":"90cf84a1-03e1-46a5-96a1-8cddf4312669","Type":"ContainerStarted","Data":"cdf132c74a722dedf4d49bf7eded8266fb980bef35fd6162ed61624642291f51"} Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.829380 5049 scope.go:117] "RemoveContainer" containerID="33ad41194e59644c75bccd081038cd5f0ce9d0aa4af66744cb0384bf233f2735" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.831110 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-9x9fw"] Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.837843 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-9x9fw"] Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.854884 5049 scope.go:117] "RemoveContainer" containerID="11d6ab08fb6419794c4fc7c9273e1e544e723c9ee6ccbf83f9ade3747a09e9e1" Jan 27 17:17:09 crc kubenswrapper[5049]: E0127 17:17:09.857318 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11d6ab08fb6419794c4fc7c9273e1e544e723c9ee6ccbf83f9ade3747a09e9e1\": container with ID starting with 11d6ab08fb6419794c4fc7c9273e1e544e723c9ee6ccbf83f9ade3747a09e9e1 not found: ID does not exist" containerID="11d6ab08fb6419794c4fc7c9273e1e544e723c9ee6ccbf83f9ade3747a09e9e1" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.857362 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11d6ab08fb6419794c4fc7c9273e1e544e723c9ee6ccbf83f9ade3747a09e9e1"} err="failed to get container status \"11d6ab08fb6419794c4fc7c9273e1e544e723c9ee6ccbf83f9ade3747a09e9e1\": rpc error: code = NotFound desc = could not find container \"11d6ab08fb6419794c4fc7c9273e1e544e723c9ee6ccbf83f9ade3747a09e9e1\": container with ID starting with 11d6ab08fb6419794c4fc7c9273e1e544e723c9ee6ccbf83f9ade3747a09e9e1 not found: ID does not exist" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.857390 5049 scope.go:117] "RemoveContainer" containerID="33ad41194e59644c75bccd081038cd5f0ce9d0aa4af66744cb0384bf233f2735" Jan 27 17:17:09 crc kubenswrapper[5049]: E0127 17:17:09.858446 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33ad41194e59644c75bccd081038cd5f0ce9d0aa4af66744cb0384bf233f2735\": container with ID starting with 33ad41194e59644c75bccd081038cd5f0ce9d0aa4af66744cb0384bf233f2735 not found: ID does not exist" containerID="33ad41194e59644c75bccd081038cd5f0ce9d0aa4af66744cb0384bf233f2735" Jan 27 17:17:09 crc kubenswrapper[5049]: I0127 17:17:09.858472 5049 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"33ad41194e59644c75bccd081038cd5f0ce9d0aa4af66744cb0384bf233f2735"} err="failed to get container status \"33ad41194e59644c75bccd081038cd5f0ce9d0aa4af66744cb0384bf233f2735\": rpc error: code = NotFound desc = could not find container \"33ad41194e59644c75bccd081038cd5f0ce9d0aa4af66744cb0384bf233f2735\": container with ID starting with 33ad41194e59644c75bccd081038cd5f0ce9d0aa4af66744cb0384bf233f2735 not found: ID does not exist" Jan 27 17:17:10 crc kubenswrapper[5049]: I0127 17:17:10.813870 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" event={"ID":"90cf84a1-03e1-46a5-96a1-8cddf4312669","Type":"ContainerStarted","Data":"843c7623ef79919c967dd028bf2b8986ba3d72e3eb1574bd51bbde06bbbf7480"} Jan 27 17:17:10 crc kubenswrapper[5049]: I0127 17:17:10.814872 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:10 crc kubenswrapper[5049]: I0127 17:17:10.839410 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" podStartSLOduration=2.839386373 podStartE2EDuration="2.839386373s" podCreationTimestamp="2026-01-27 17:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:17:10.836196311 +0000 UTC m=+1205.935169860" watchObservedRunningTime="2026-01-27 17:17:10.839386373 +0000 UTC m=+1205.938359932" Jan 27 17:17:11 crc kubenswrapper[5049]: I0127 17:17:11.663086 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e55ea905-97a1-4d37-82f6-6c0b44dde090" path="/var/lib/kubelet/pods/e55ea905-97a1-4d37-82f6-6c0b44dde090/volumes" Jan 27 17:17:11 crc kubenswrapper[5049]: I0127 17:17:11.760157 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 17:17:12 crc kubenswrapper[5049]: I0127 17:17:12.025026 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.545021 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-x4ljv"] Jan 27 17:17:13 crc kubenswrapper[5049]: E0127 17:17:13.545626 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55ea905-97a1-4d37-82f6-6c0b44dde090" containerName="dnsmasq-dns" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.545640 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55ea905-97a1-4d37-82f6-6c0b44dde090" containerName="dnsmasq-dns" Jan 27 17:17:13 crc kubenswrapper[5049]: E0127 17:17:13.545685 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55ea905-97a1-4d37-82f6-6c0b44dde090" containerName="init" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.545691 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55ea905-97a1-4d37-82f6-6c0b44dde090" containerName="init" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.545837 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e55ea905-97a1-4d37-82f6-6c0b44dde090" containerName="dnsmasq-dns" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.546350 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-x4ljv" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.553965 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-x4ljv"] Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.676023 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfkfz\" (UniqueName: \"kubernetes.io/projected/4107b32c-cf40-4fe7-bd5b-00c00ff476f8-kube-api-access-nfkfz\") pod \"cinder-db-create-x4ljv\" (UID: \"4107b32c-cf40-4fe7-bd5b-00c00ff476f8\") " pod="openstack/cinder-db-create-x4ljv" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.677605 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4107b32c-cf40-4fe7-bd5b-00c00ff476f8-operator-scripts\") pod \"cinder-db-create-x4ljv\" (UID: \"4107b32c-cf40-4fe7-bd5b-00c00ff476f8\") " pod="openstack/cinder-db-create-x4ljv" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.693284 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-csqpn"] Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.694459 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-csqpn" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.715017 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-973c-account-create-update-vlmrw"] Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.716228 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-973c-account-create-update-vlmrw" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.724070 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.724890 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-csqpn"] Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.747289 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-973c-account-create-update-vlmrw"] Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.779270 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-37a8-account-create-update-2pxzq"] Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.780740 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-37a8-account-create-update-2pxzq" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.783959 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.784457 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-37a8-account-create-update-2pxzq"] Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.791550 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfkfz\" (UniqueName: \"kubernetes.io/projected/4107b32c-cf40-4fe7-bd5b-00c00ff476f8-kube-api-access-nfkfz\") pod \"cinder-db-create-x4ljv\" (UID: \"4107b32c-cf40-4fe7-bd5b-00c00ff476f8\") " pod="openstack/cinder-db-create-x4ljv" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.791683 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4107b32c-cf40-4fe7-bd5b-00c00ff476f8-operator-scripts\") pod \"cinder-db-create-x4ljv\" (UID: \"4107b32c-cf40-4fe7-bd5b-00c00ff476f8\") " pod="openstack/cinder-db-create-x4ljv" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.792779 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4107b32c-cf40-4fe7-bd5b-00c00ff476f8-operator-scripts\") pod \"cinder-db-create-x4ljv\" (UID: \"4107b32c-cf40-4fe7-bd5b-00c00ff476f8\") " pod="openstack/cinder-db-create-x4ljv" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.813904 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfkfz\" (UniqueName: \"kubernetes.io/projected/4107b32c-cf40-4fe7-bd5b-00c00ff476f8-kube-api-access-nfkfz\") pod \"cinder-db-create-x4ljv\" (UID: \"4107b32c-cf40-4fe7-bd5b-00c00ff476f8\") " pod="openstack/cinder-db-create-x4ljv" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.854475 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-99mrr"] Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.855485 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-99mrr" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.867081 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-99mrr"] Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.893608 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdddc\" (UniqueName: \"kubernetes.io/projected/e466ed3f-24cb-4a9a-9820-c4f5a31b7982-kube-api-access-fdddc\") pod \"barbican-db-create-csqpn\" (UID: \"e466ed3f-24cb-4a9a-9820-c4f5a31b7982\") " pod="openstack/barbican-db-create-csqpn" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.893734 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e466ed3f-24cb-4a9a-9820-c4f5a31b7982-operator-scripts\") pod \"barbican-db-create-csqpn\" (UID: \"e466ed3f-24cb-4a9a-9820-c4f5a31b7982\") " pod="openstack/barbican-db-create-csqpn" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.893847 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc638766-f495-40bf-b04e-017d19ca3361-operator-scripts\") pod \"barbican-973c-account-create-update-vlmrw\" (UID: \"dc638766-f495-40bf-b04e-017d19ca3361\") " pod="openstack/barbican-973c-account-create-update-vlmrw" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.894029 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f58t\" (UniqueName: \"kubernetes.io/projected/755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3-kube-api-access-5f58t\") pod \"cinder-37a8-account-create-update-2pxzq\" (UID: \"755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3\") " pod="openstack/cinder-37a8-account-create-update-2pxzq" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.894087 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3-operator-scripts\") pod \"cinder-37a8-account-create-update-2pxzq\" (UID: \"755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3\") " pod="openstack/cinder-37a8-account-create-update-2pxzq" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.894114 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jctn\" (UniqueName: \"kubernetes.io/projected/dc638766-f495-40bf-b04e-017d19ca3361-kube-api-access-9jctn\") pod \"barbican-973c-account-create-update-vlmrw\" (UID: \"dc638766-f495-40bf-b04e-017d19ca3361\") " pod="openstack/barbican-973c-account-create-update-vlmrw" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.910456 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-x4ljv" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.917041 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-87m2z"] Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.918072 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-87m2z" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.921155 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.921190 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.921365 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.921451 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ztd82" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.925465 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-87m2z"] Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.960209 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d77-account-create-update-pf4g2"] Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.961247 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d77-account-create-update-pf4g2" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.964493 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.973342 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d77-account-create-update-pf4g2"] Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.995195 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rqkl\" (UniqueName: \"kubernetes.io/projected/fcaf6f1b-c353-436b-aeb6-23344442588b-kube-api-access-7rqkl\") pod \"neutron-db-create-99mrr\" (UID: \"fcaf6f1b-c353-436b-aeb6-23344442588b\") " pod="openstack/neutron-db-create-99mrr" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.995275 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcaf6f1b-c353-436b-aeb6-23344442588b-operator-scripts\") pod \"neutron-db-create-99mrr\" (UID: \"fcaf6f1b-c353-436b-aeb6-23344442588b\") " pod="openstack/neutron-db-create-99mrr" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.995329 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f58t\" (UniqueName: \"kubernetes.io/projected/755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3-kube-api-access-5f58t\") pod \"cinder-37a8-account-create-update-2pxzq\" (UID: \"755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3\") " pod="openstack/cinder-37a8-account-create-update-2pxzq" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.995371 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3-operator-scripts\") pod \"cinder-37a8-account-create-update-2pxzq\" (UID: \"755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3\") " pod="openstack/cinder-37a8-account-create-update-2pxzq" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.995402 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jctn\" (UniqueName: \"kubernetes.io/projected/dc638766-f495-40bf-b04e-017d19ca3361-kube-api-access-9jctn\") pod 
\"barbican-973c-account-create-update-vlmrw\" (UID: \"dc638766-f495-40bf-b04e-017d19ca3361\") " pod="openstack/barbican-973c-account-create-update-vlmrw" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.995462 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdddc\" (UniqueName: \"kubernetes.io/projected/e466ed3f-24cb-4a9a-9820-c4f5a31b7982-kube-api-access-fdddc\") pod \"barbican-db-create-csqpn\" (UID: \"e466ed3f-24cb-4a9a-9820-c4f5a31b7982\") " pod="openstack/barbican-db-create-csqpn" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.995492 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e466ed3f-24cb-4a9a-9820-c4f5a31b7982-operator-scripts\") pod \"barbican-db-create-csqpn\" (UID: \"e466ed3f-24cb-4a9a-9820-c4f5a31b7982\") " pod="openstack/barbican-db-create-csqpn" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.995536 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc638766-f495-40bf-b04e-017d19ca3361-operator-scripts\") pod \"barbican-973c-account-create-update-vlmrw\" (UID: \"dc638766-f495-40bf-b04e-017d19ca3361\") " pod="openstack/barbican-973c-account-create-update-vlmrw" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.996127 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3-operator-scripts\") pod \"cinder-37a8-account-create-update-2pxzq\" (UID: \"755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3\") " pod="openstack/cinder-37a8-account-create-update-2pxzq" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.996403 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc638766-f495-40bf-b04e-017d19ca3361-operator-scripts\") pod \"barbican-973c-account-create-update-vlmrw\" (UID: \"dc638766-f495-40bf-b04e-017d19ca3361\") " pod="openstack/barbican-973c-account-create-update-vlmrw" Jan 27 17:17:13 crc kubenswrapper[5049]: I0127 17:17:13.996470 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e466ed3f-24cb-4a9a-9820-c4f5a31b7982-operator-scripts\") pod \"barbican-db-create-csqpn\" (UID: \"e466ed3f-24cb-4a9a-9820-c4f5a31b7982\") " pod="openstack/barbican-db-create-csqpn" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.013332 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f58t\" (UniqueName: \"kubernetes.io/projected/755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3-kube-api-access-5f58t\") pod \"cinder-37a8-account-create-update-2pxzq\" (UID: \"755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3\") " pod="openstack/cinder-37a8-account-create-update-2pxzq" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.024312 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdddc\" (UniqueName: \"kubernetes.io/projected/e466ed3f-24cb-4a9a-9820-c4f5a31b7982-kube-api-access-fdddc\") pod \"barbican-db-create-csqpn\" (UID: \"e466ed3f-24cb-4a9a-9820-c4f5a31b7982\") " pod="openstack/barbican-db-create-csqpn" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.026104 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-csqpn" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.026296 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jctn\" (UniqueName: \"kubernetes.io/projected/dc638766-f495-40bf-b04e-017d19ca3361-kube-api-access-9jctn\") pod \"barbican-973c-account-create-update-vlmrw\" (UID: \"dc638766-f495-40bf-b04e-017d19ca3361\") " pod="openstack/barbican-973c-account-create-update-vlmrw" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.040338 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-973c-account-create-update-vlmrw" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.096885 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f1e2e4-7888-40f5-962c-2298aaa75d60-combined-ca-bundle\") pod \"keystone-db-sync-87m2z\" (UID: \"56f1e2e4-7888-40f5-962c-2298aaa75d60\") " pod="openstack/keystone-db-sync-87m2z" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.096939 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rqkl\" (UniqueName: \"kubernetes.io/projected/fcaf6f1b-c353-436b-aeb6-23344442588b-kube-api-access-7rqkl\") pod \"neutron-db-create-99mrr\" (UID: \"fcaf6f1b-c353-436b-aeb6-23344442588b\") " pod="openstack/neutron-db-create-99mrr" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.096961 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcktw\" (UniqueName: \"kubernetes.io/projected/56f1e2e4-7888-40f5-962c-2298aaa75d60-kube-api-access-lcktw\") pod \"keystone-db-sync-87m2z\" (UID: \"56f1e2e4-7888-40f5-962c-2298aaa75d60\") " pod="openstack/keystone-db-sync-87m2z" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.096984 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16eba5e-1610-465b-b346-51692b4d7ad0-operator-scripts\") pod \"neutron-6d77-account-create-update-pf4g2\" (UID: \"d16eba5e-1610-465b-b346-51692b4d7ad0\") " pod="openstack/neutron-6d77-account-create-update-pf4g2" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.097004 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f1e2e4-7888-40f5-962c-2298aaa75d60-config-data\") pod \"keystone-db-sync-87m2z\" (UID: \"56f1e2e4-7888-40f5-962c-2298aaa75d60\") " pod="openstack/keystone-db-sync-87m2z" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.097036 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcaf6f1b-c353-436b-aeb6-23344442588b-operator-scripts\") pod \"neutron-db-create-99mrr\" (UID: \"fcaf6f1b-c353-436b-aeb6-23344442588b\") " pod="openstack/neutron-db-create-99mrr" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.097070 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjl7r\" (UniqueName: \"kubernetes.io/projected/d16eba5e-1610-465b-b346-51692b4d7ad0-kube-api-access-bjl7r\") pod \"neutron-6d77-account-create-update-pf4g2\" (UID: \"d16eba5e-1610-465b-b346-51692b4d7ad0\") " pod="openstack/neutron-6d77-account-create-update-pf4g2" Jan 27 
17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.098269 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcaf6f1b-c353-436b-aeb6-23344442588b-operator-scripts\") pod \"neutron-db-create-99mrr\" (UID: \"fcaf6f1b-c353-436b-aeb6-23344442588b\") " pod="openstack/neutron-db-create-99mrr" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.107810 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-37a8-account-create-update-2pxzq" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.116929 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rqkl\" (UniqueName: \"kubernetes.io/projected/fcaf6f1b-c353-436b-aeb6-23344442588b-kube-api-access-7rqkl\") pod \"neutron-db-create-99mrr\" (UID: \"fcaf6f1b-c353-436b-aeb6-23344442588b\") " pod="openstack/neutron-db-create-99mrr" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.173570 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-99mrr" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.203147 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjl7r\" (UniqueName: \"kubernetes.io/projected/d16eba5e-1610-465b-b346-51692b4d7ad0-kube-api-access-bjl7r\") pod \"neutron-6d77-account-create-update-pf4g2\" (UID: \"d16eba5e-1610-465b-b346-51692b4d7ad0\") " pod="openstack/neutron-6d77-account-create-update-pf4g2" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.203267 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f1e2e4-7888-40f5-962c-2298aaa75d60-combined-ca-bundle\") pod \"keystone-db-sync-87m2z\" (UID: \"56f1e2e4-7888-40f5-962c-2298aaa75d60\") " pod="openstack/keystone-db-sync-87m2z" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.203303 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcktw\" (UniqueName: \"kubernetes.io/projected/56f1e2e4-7888-40f5-962c-2298aaa75d60-kube-api-access-lcktw\") pod \"keystone-db-sync-87m2z\" (UID: \"56f1e2e4-7888-40f5-962c-2298aaa75d60\") " pod="openstack/keystone-db-sync-87m2z" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.203326 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16eba5e-1610-465b-b346-51692b4d7ad0-operator-scripts\") pod \"neutron-6d77-account-create-update-pf4g2\" (UID: \"d16eba5e-1610-465b-b346-51692b4d7ad0\") " pod="openstack/neutron-6d77-account-create-update-pf4g2" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.203342 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f1e2e4-7888-40f5-962c-2298aaa75d60-config-data\") pod \"keystone-db-sync-87m2z\" (UID: \"56f1e2e4-7888-40f5-962c-2298aaa75d60\") " pod="openstack/keystone-db-sync-87m2z" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.205809 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16eba5e-1610-465b-b346-51692b4d7ad0-operator-scripts\") pod \"neutron-6d77-account-create-update-pf4g2\" (UID: \"d16eba5e-1610-465b-b346-51692b4d7ad0\") " pod="openstack/neutron-6d77-account-create-update-pf4g2" Jan 27 17:17:14 crc 
kubenswrapper[5049]: I0127 17:17:14.211254 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f1e2e4-7888-40f5-962c-2298aaa75d60-combined-ca-bundle\") pod \"keystone-db-sync-87m2z\" (UID: \"56f1e2e4-7888-40f5-962c-2298aaa75d60\") " pod="openstack/keystone-db-sync-87m2z" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.218411 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f1e2e4-7888-40f5-962c-2298aaa75d60-config-data\") pod \"keystone-db-sync-87m2z\" (UID: \"56f1e2e4-7888-40f5-962c-2298aaa75d60\") " pod="openstack/keystone-db-sync-87m2z" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.222895 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjl7r\" (UniqueName: \"kubernetes.io/projected/d16eba5e-1610-465b-b346-51692b4d7ad0-kube-api-access-bjl7r\") pod \"neutron-6d77-account-create-update-pf4g2\" (UID: \"d16eba5e-1610-465b-b346-51692b4d7ad0\") " pod="openstack/neutron-6d77-account-create-update-pf4g2" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.229055 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcktw\" (UniqueName: \"kubernetes.io/projected/56f1e2e4-7888-40f5-962c-2298aaa75d60-kube-api-access-lcktw\") pod \"keystone-db-sync-87m2z\" (UID: \"56f1e2e4-7888-40f5-962c-2298aaa75d60\") " pod="openstack/keystone-db-sync-87m2z" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.262621 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-87m2z" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.428775 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d77-account-create-update-pf4g2" Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.487017 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-x4ljv"] Jan 27 17:17:14 crc kubenswrapper[5049]: W0127 17:17:14.492650 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4107b32c_cf40_4fe7_bd5b_00c00ff476f8.slice/crio-600a6b7cef833fe9680e4ca579f5b0bcd0c822dc53f74ad97d6c5fa0dadc6015 WatchSource:0}: Error finding container 600a6b7cef833fe9680e4ca579f5b0bcd0c822dc53f74ad97d6c5fa0dadc6015: Status 404 returned error can't find the container with id 600a6b7cef833fe9680e4ca579f5b0bcd0c822dc53f74ad97d6c5fa0dadc6015 Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.594606 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-csqpn"] Jan 27 17:17:14 crc kubenswrapper[5049]: W0127 17:17:14.605552 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode466ed3f_24cb_4a9a_9820_c4f5a31b7982.slice/crio-c68256fbb994bf8571c2c570b7ea6db03e8fc0dc0ae6414cb86684b18620543b WatchSource:0}: Error finding container c68256fbb994bf8571c2c570b7ea6db03e8fc0dc0ae6414cb86684b18620543b: Status 404 returned error can't find the container with id c68256fbb994bf8571c2c570b7ea6db03e8fc0dc0ae6414cb86684b18620543b Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.714625 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-973c-account-create-update-vlmrw"] Jan 27 17:17:14 crc kubenswrapper[5049]: W0127 17:17:14.722958 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod755c33ae_a2bd_4f5f_bf3b_3b9d094bc0a3.slice/crio-90536afdfda343587b96d14881db010f9cb1dbfa8ccef26ef183a8ed7a6504be WatchSource:0}: Error finding container 90536afdfda343587b96d14881db010f9cb1dbfa8ccef26ef183a8ed7a6504be: Status 404 returned error can't find the container with id 90536afdfda343587b96d14881db010f9cb1dbfa8ccef26ef183a8ed7a6504be Jan 27 17:17:14 crc kubenswrapper[5049]: W0127 17:17:14.723160 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc638766_f495_40bf_b04e_017d19ca3361.slice/crio-bd2829968ec9542904eb54fc44ba00e13a68550cf5dc44cb206ac3289d086c49 WatchSource:0}: Error finding container bd2829968ec9542904eb54fc44ba00e13a68550cf5dc44cb206ac3289d086c49: Status 404 returned error can't find the container with id bd2829968ec9542904eb54fc44ba00e13a68550cf5dc44cb206ac3289d086c49 Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.725283 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-37a8-account-create-update-2pxzq"] Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.810130 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-99mrr"] Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.821196 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-87m2z"] Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.851436 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-csqpn" event={"ID":"e466ed3f-24cb-4a9a-9820-c4f5a31b7982","Type":"ContainerStarted","Data":"bce3f8ac28bbafaaf90ec8f0010712151e873fe41ef7e673be768c9b5aef4e48"} Jan 27 
17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.851474 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-csqpn" event={"ID":"e466ed3f-24cb-4a9a-9820-c4f5a31b7982","Type":"ContainerStarted","Data":"c68256fbb994bf8571c2c570b7ea6db03e8fc0dc0ae6414cb86684b18620543b"}
Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.853084 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-99mrr" event={"ID":"fcaf6f1b-c353-436b-aeb6-23344442588b","Type":"ContainerStarted","Data":"1d578203ed50a8b53914544664beda9e52e3fd1841eb5ba59dae8d4a8a78aada"}
Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.854230 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x4ljv" event={"ID":"4107b32c-cf40-4fe7-bd5b-00c00ff476f8","Type":"ContainerStarted","Data":"b2606a0b66c74e770aad6521163bf92feb7174e6534ebf3f44b0803ed90204d1"}
Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.854255 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x4ljv" event={"ID":"4107b32c-cf40-4fe7-bd5b-00c00ff476f8","Type":"ContainerStarted","Data":"600a6b7cef833fe9680e4ca579f5b0bcd0c822dc53f74ad97d6c5fa0dadc6015"}
Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.858554 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-37a8-account-create-update-2pxzq" event={"ID":"755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3","Type":"ContainerStarted","Data":"90536afdfda343587b96d14881db010f9cb1dbfa8ccef26ef183a8ed7a6504be"}
Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.862853 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-87m2z" event={"ID":"56f1e2e4-7888-40f5-962c-2298aaa75d60","Type":"ContainerStarted","Data":"924f7f0dd12748677e0dd2d15f2e5e982be799ba8db6e9a47cd668c243a9d42b"}
Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.864908 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-973c-account-create-update-vlmrw" event={"ID":"dc638766-f495-40bf-b04e-017d19ca3361","Type":"ContainerStarted","Data":"bd2829968ec9542904eb54fc44ba00e13a68550cf5dc44cb206ac3289d086c49"}
Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.873827 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-csqpn" podStartSLOduration=1.873808572 podStartE2EDuration="1.873808572s" podCreationTimestamp="2026-01-27 17:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:17:14.867967324 +0000 UTC m=+1209.966940873" watchObservedRunningTime="2026-01-27 17:17:14.873808572 +0000 UTC m=+1209.972782121"
Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.882629 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-x4ljv" podStartSLOduration=1.882610576 podStartE2EDuration="1.882610576s" podCreationTimestamp="2026-01-27 17:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:17:14.880242748 +0000 UTC m=+1209.979216297" watchObservedRunningTime="2026-01-27 17:17:14.882610576 +0000 UTC m=+1209.981584115"
Jan 27 17:17:14 crc kubenswrapper[5049]: I0127 17:17:14.981828 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d77-account-create-update-pf4g2"]
Jan 27 17:17:14 crc kubenswrapper[5049]: W0127 17:17:14.997659 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd16eba5e_1610_465b_b346_51692b4d7ad0.slice/crio-a2bc2c5276597af02a79b1893b16ae346e02aadb312b446865673678f4fcc6cb WatchSource:0}: Error finding container a2bc2c5276597af02a79b1893b16ae346e02aadb312b446865673678f4fcc6cb: Status 404 returned error can't find the container with id a2bc2c5276597af02a79b1893b16ae346e02aadb312b446865673678f4fcc6cb
Jan 27 17:17:15 crc kubenswrapper[5049]: I0127 17:17:15.877986 5049 generic.go:334] "Generic (PLEG): container finished" podID="fcaf6f1b-c353-436b-aeb6-23344442588b" containerID="ce96b42959d702c7c1ddfd5a0a340e66afe6b3a0ac3d7f1366977905c48b5ef8" exitCode=0
Jan 27 17:17:15 crc kubenswrapper[5049]: I0127 17:17:15.878052 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-99mrr" event={"ID":"fcaf6f1b-c353-436b-aeb6-23344442588b","Type":"ContainerDied","Data":"ce96b42959d702c7c1ddfd5a0a340e66afe6b3a0ac3d7f1366977905c48b5ef8"}
Jan 27 17:17:15 crc kubenswrapper[5049]: I0127 17:17:15.880319 5049 generic.go:334] "Generic (PLEG): container finished" podID="4107b32c-cf40-4fe7-bd5b-00c00ff476f8" containerID="b2606a0b66c74e770aad6521163bf92feb7174e6534ebf3f44b0803ed90204d1" exitCode=0
Jan 27 17:17:15 crc kubenswrapper[5049]: I0127 17:17:15.880385 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x4ljv" event={"ID":"4107b32c-cf40-4fe7-bd5b-00c00ff476f8","Type":"ContainerDied","Data":"b2606a0b66c74e770aad6521163bf92feb7174e6534ebf3f44b0803ed90204d1"}
Jan 27 17:17:15 crc kubenswrapper[5049]: I0127 17:17:15.882170 5049 generic.go:334] "Generic (PLEG): container finished" podID="755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3" containerID="ee5ae698dfe15cec5da501cfca88f038e751a977e00e817c10344908bab2296c" exitCode=0
Jan 27 17:17:15 crc kubenswrapper[5049]: I0127 17:17:15.882297 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-37a8-account-create-update-2pxzq" event={"ID":"755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3","Type":"ContainerDied","Data":"ee5ae698dfe15cec5da501cfca88f038e751a977e00e817c10344908bab2296c"}
Jan 27 17:17:15 crc kubenswrapper[5049]: I0127 17:17:15.883735 5049 generic.go:334] "Generic (PLEG): container finished" podID="d16eba5e-1610-465b-b346-51692b4d7ad0" containerID="d9652b205e581e553a0c9e06258e912e875db1cccc2fc4ed7a75314d3904f38d" exitCode=0
Jan 27 17:17:15 crc kubenswrapper[5049]: I0127 17:17:15.883777 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d77-account-create-update-pf4g2" event={"ID":"d16eba5e-1610-465b-b346-51692b4d7ad0","Type":"ContainerDied","Data":"d9652b205e581e553a0c9e06258e912e875db1cccc2fc4ed7a75314d3904f38d"}
Jan 27 17:17:15 crc kubenswrapper[5049]: I0127 17:17:15.883809 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d77-account-create-update-pf4g2" event={"ID":"d16eba5e-1610-465b-b346-51692b4d7ad0","Type":"ContainerStarted","Data":"a2bc2c5276597af02a79b1893b16ae346e02aadb312b446865673678f4fcc6cb"}
Jan 27 17:17:15 crc kubenswrapper[5049]: I0127 17:17:15.885163 5049 generic.go:334] "Generic (PLEG): container finished" podID="dc638766-f495-40bf-b04e-017d19ca3361" containerID="2e88a16790a77d57aa2efbb521452dac400308fe716002d7c10d178232b694c3" exitCode=0
Jan 27 17:17:15 crc kubenswrapper[5049]: I0127 17:17:15.885213 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-973c-account-create-update-vlmrw" event={"ID":"dc638766-f495-40bf-b04e-017d19ca3361","Type":"ContainerDied","Data":"2e88a16790a77d57aa2efbb521452dac400308fe716002d7c10d178232b694c3"}
Jan 27 17:17:15 crc kubenswrapper[5049]: I0127 17:17:15.886265 5049 generic.go:334] "Generic (PLEG): container finished" podID="e466ed3f-24cb-4a9a-9820-c4f5a31b7982" containerID="bce3f8ac28bbafaaf90ec8f0010712151e873fe41ef7e673be768c9b5aef4e48" exitCode=0
Jan 27 17:17:15 crc kubenswrapper[5049]: I0127 17:17:15.886297 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-csqpn" event={"ID":"e466ed3f-24cb-4a9a-9820-c4f5a31b7982","Type":"ContainerDied","Data":"bce3f8ac28bbafaaf90ec8f0010712151e873fe41ef7e673be768c9b5aef4e48"}
Jan 27 17:17:17 crc kubenswrapper[5049]: I0127 17:17:17.781999 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:17:17 crc kubenswrapper[5049]: I0127 17:17:17.782432 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:17:18 crc kubenswrapper[5049]: I0127 17:17:18.478885 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw"
Jan 27 17:17:18 crc kubenswrapper[5049]: I0127 17:17:18.554790 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6ncxg"]
Jan 27 17:17:18 crc kubenswrapper[5049]: I0127 17:17:18.555063 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-6ncxg" podUID="47a64870-144b-4e50-a338-4a10e39333d2" containerName="dnsmasq-dns" containerID="cri-o://18a31fb37cd3e2dfb9da97f69a2ef54c149621f0405876f3cb7f425f48e3d989" gracePeriod=10
Jan 27 17:17:18 crc kubenswrapper[5049]: I0127 17:17:18.913757 5049 generic.go:334] "Generic (PLEG): container finished" podID="47a64870-144b-4e50-a338-4a10e39333d2" containerID="18a31fb37cd3e2dfb9da97f69a2ef54c149621f0405876f3cb7f425f48e3d989" exitCode=0
Jan 27 17:17:18 crc kubenswrapper[5049]: I0127 17:17:18.913806 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6ncxg" event={"ID":"47a64870-144b-4e50-a338-4a10e39333d2","Type":"ContainerDied","Data":"18a31fb37cd3e2dfb9da97f69a2ef54c149621f0405876f3cb7f425f48e3d989"}
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.368718 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-x4ljv"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.375421 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-37a8-account-create-update-2pxzq"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.392009 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-csqpn"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.416037 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d77-account-create-update-pf4g2"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.437944 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-99mrr"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.443443 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-973c-account-create-update-vlmrw"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.492159 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjl7r\" (UniqueName: \"kubernetes.io/projected/d16eba5e-1610-465b-b346-51692b4d7ad0-kube-api-access-bjl7r\") pod \"d16eba5e-1610-465b-b346-51692b4d7ad0\" (UID: \"d16eba5e-1610-465b-b346-51692b4d7ad0\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.492327 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e466ed3f-24cb-4a9a-9820-c4f5a31b7982-operator-scripts\") pod \"e466ed3f-24cb-4a9a-9820-c4f5a31b7982\" (UID: \"e466ed3f-24cb-4a9a-9820-c4f5a31b7982\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.492363 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfkfz\" (UniqueName: \"kubernetes.io/projected/4107b32c-cf40-4fe7-bd5b-00c00ff476f8-kube-api-access-nfkfz\") pod \"4107b32c-cf40-4fe7-bd5b-00c00ff476f8\" (UID: \"4107b32c-cf40-4fe7-bd5b-00c00ff476f8\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.492388 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16eba5e-1610-465b-b346-51692b4d7ad0-operator-scripts\") pod \"d16eba5e-1610-465b-b346-51692b4d7ad0\" (UID: \"d16eba5e-1610-465b-b346-51692b4d7ad0\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.492412 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3-operator-scripts\") pod \"755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3\" (UID: \"755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.492446 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdddc\" (UniqueName: \"kubernetes.io/projected/e466ed3f-24cb-4a9a-9820-c4f5a31b7982-kube-api-access-fdddc\") pod \"e466ed3f-24cb-4a9a-9820-c4f5a31b7982\" (UID: \"e466ed3f-24cb-4a9a-9820-c4f5a31b7982\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.492473 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4107b32c-cf40-4fe7-bd5b-00c00ff476f8-operator-scripts\") pod \"4107b32c-cf40-4fe7-bd5b-00c00ff476f8\" (UID: \"4107b32c-cf40-4fe7-bd5b-00c00ff476f8\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.492520 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f58t\" (UniqueName: \"kubernetes.io/projected/755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3-kube-api-access-5f58t\") pod \"755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3\" (UID: \"755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.494659 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d16eba5e-1610-465b-b346-51692b4d7ad0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d16eba5e-1610-465b-b346-51692b4d7ad0" (UID: "d16eba5e-1610-465b-b346-51692b4d7ad0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.496103 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4107b32c-cf40-4fe7-bd5b-00c00ff476f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4107b32c-cf40-4fe7-bd5b-00c00ff476f8" (UID: "4107b32c-cf40-4fe7-bd5b-00c00ff476f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.496309 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3" (UID: "755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.497188 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e466ed3f-24cb-4a9a-9820-c4f5a31b7982-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e466ed3f-24cb-4a9a-9820-c4f5a31b7982" (UID: "e466ed3f-24cb-4a9a-9820-c4f5a31b7982"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.497855 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16eba5e-1610-465b-b346-51692b4d7ad0-kube-api-access-bjl7r" (OuterVolumeSpecName: "kube-api-access-bjl7r") pod "d16eba5e-1610-465b-b346-51692b4d7ad0" (UID: "d16eba5e-1610-465b-b346-51692b4d7ad0"). InnerVolumeSpecName "kube-api-access-bjl7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.499795 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e466ed3f-24cb-4a9a-9820-c4f5a31b7982-kube-api-access-fdddc" (OuterVolumeSpecName: "kube-api-access-fdddc") pod "e466ed3f-24cb-4a9a-9820-c4f5a31b7982" (UID: "e466ed3f-24cb-4a9a-9820-c4f5a31b7982"). InnerVolumeSpecName "kube-api-access-fdddc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.504783 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3-kube-api-access-5f58t" (OuterVolumeSpecName: "kube-api-access-5f58t") pod "755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3" (UID: "755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3"). InnerVolumeSpecName "kube-api-access-5f58t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.516506 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4107b32c-cf40-4fe7-bd5b-00c00ff476f8-kube-api-access-nfkfz" (OuterVolumeSpecName: "kube-api-access-nfkfz") pod "4107b32c-cf40-4fe7-bd5b-00c00ff476f8" (UID: "4107b32c-cf40-4fe7-bd5b-00c00ff476f8"). InnerVolumeSpecName "kube-api-access-nfkfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.593760 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rqkl\" (UniqueName: \"kubernetes.io/projected/fcaf6f1b-c353-436b-aeb6-23344442588b-kube-api-access-7rqkl\") pod \"fcaf6f1b-c353-436b-aeb6-23344442588b\" (UID: \"fcaf6f1b-c353-436b-aeb6-23344442588b\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.593854 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jctn\" (UniqueName: \"kubernetes.io/projected/dc638766-f495-40bf-b04e-017d19ca3361-kube-api-access-9jctn\") pod \"dc638766-f495-40bf-b04e-017d19ca3361\" (UID: \"dc638766-f495-40bf-b04e-017d19ca3361\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.593952 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcaf6f1b-c353-436b-aeb6-23344442588b-operator-scripts\") pod \"fcaf6f1b-c353-436b-aeb6-23344442588b\" (UID: \"fcaf6f1b-c353-436b-aeb6-23344442588b\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.594039 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc638766-f495-40bf-b04e-017d19ca3361-operator-scripts\") pod \"dc638766-f495-40bf-b04e-017d19ca3361\" (UID: \"dc638766-f495-40bf-b04e-017d19ca3361\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.594343 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e466ed3f-24cb-4a9a-9820-c4f5a31b7982-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.594361 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfkfz\" (UniqueName: \"kubernetes.io/projected/4107b32c-cf40-4fe7-bd5b-00c00ff476f8-kube-api-access-nfkfz\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.594371 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16eba5e-1610-465b-b346-51692b4d7ad0-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.594379 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.594389 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdddc\" (UniqueName: \"kubernetes.io/projected/e466ed3f-24cb-4a9a-9820-c4f5a31b7982-kube-api-access-fdddc\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.594456 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4107b32c-cf40-4fe7-bd5b-00c00ff476f8-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.594467 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f58t\" (UniqueName: \"kubernetes.io/projected/755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3-kube-api-access-5f58t\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.594476 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjl7r\" (UniqueName: \"kubernetes.io/projected/d16eba5e-1610-465b-b346-51692b4d7ad0-kube-api-access-bjl7r\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.594910 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc638766-f495-40bf-b04e-017d19ca3361-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc638766-f495-40bf-b04e-017d19ca3361" (UID: "dc638766-f495-40bf-b04e-017d19ca3361"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.597767 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcaf6f1b-c353-436b-aeb6-23344442588b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fcaf6f1b-c353-436b-aeb6-23344442588b" (UID: "fcaf6f1b-c353-436b-aeb6-23344442588b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.598031 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcaf6f1b-c353-436b-aeb6-23344442588b-kube-api-access-7rqkl" (OuterVolumeSpecName: "kube-api-access-7rqkl") pod "fcaf6f1b-c353-436b-aeb6-23344442588b" (UID: "fcaf6f1b-c353-436b-aeb6-23344442588b"). InnerVolumeSpecName "kube-api-access-7rqkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.598125 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc638766-f495-40bf-b04e-017d19ca3361-kube-api-access-9jctn" (OuterVolumeSpecName: "kube-api-access-9jctn") pod "dc638766-f495-40bf-b04e-017d19ca3361" (UID: "dc638766-f495-40bf-b04e-017d19ca3361"). InnerVolumeSpecName "kube-api-access-9jctn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.601329 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.695389 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-ovsdbserver-sb\") pod \"47a64870-144b-4e50-a338-4a10e39333d2\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.695444 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6sj8\" (UniqueName: \"kubernetes.io/projected/47a64870-144b-4e50-a338-4a10e39333d2-kube-api-access-q6sj8\") pod \"47a64870-144b-4e50-a338-4a10e39333d2\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.695554 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-dns-svc\") pod \"47a64870-144b-4e50-a338-4a10e39333d2\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.695622 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-ovsdbserver-nb\") pod \"47a64870-144b-4e50-a338-4a10e39333d2\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.695661 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-config\") pod \"47a64870-144b-4e50-a338-4a10e39333d2\" (UID: \"47a64870-144b-4e50-a338-4a10e39333d2\") "
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.695955 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcaf6f1b-c353-436b-aeb6-23344442588b-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.695967 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc638766-f495-40bf-b04e-017d19ca3361-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.695976 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rqkl\" (UniqueName: \"kubernetes.io/projected/fcaf6f1b-c353-436b-aeb6-23344442588b-kube-api-access-7rqkl\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.696115 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jctn\" (UniqueName: \"kubernetes.io/projected/dc638766-f495-40bf-b04e-017d19ca3361-kube-api-access-9jctn\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.699387 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a64870-144b-4e50-a338-4a10e39333d2-kube-api-access-q6sj8" (OuterVolumeSpecName: "kube-api-access-q6sj8") pod "47a64870-144b-4e50-a338-4a10e39333d2" (UID: "47a64870-144b-4e50-a338-4a10e39333d2"). InnerVolumeSpecName "kube-api-access-q6sj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.734980 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-config" (OuterVolumeSpecName: "config") pod "47a64870-144b-4e50-a338-4a10e39333d2" (UID: "47a64870-144b-4e50-a338-4a10e39333d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.744861 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47a64870-144b-4e50-a338-4a10e39333d2" (UID: "47a64870-144b-4e50-a338-4a10e39333d2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.746556 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47a64870-144b-4e50-a338-4a10e39333d2" (UID: "47a64870-144b-4e50-a338-4a10e39333d2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.749088 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47a64870-144b-4e50-a338-4a10e39333d2" (UID: "47a64870-144b-4e50-a338-4a10e39333d2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.797454 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.798047 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-config\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.798069 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.798080 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6sj8\" (UniqueName: \"kubernetes.io/projected/47a64870-144b-4e50-a338-4a10e39333d2-kube-api-access-q6sj8\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.798091 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47a64870-144b-4e50-a338-4a10e39333d2-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.923587 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-x4ljv" event={"ID":"4107b32c-cf40-4fe7-bd5b-00c00ff476f8","Type":"ContainerDied","Data":"600a6b7cef833fe9680e4ca579f5b0bcd0c822dc53f74ad97d6c5fa0dadc6015"}
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.923644 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="600a6b7cef833fe9680e4ca579f5b0bcd0c822dc53f74ad97d6c5fa0dadc6015"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.923760 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-x4ljv"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.927396 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-37a8-account-create-update-2pxzq" event={"ID":"755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3","Type":"ContainerDied","Data":"90536afdfda343587b96d14881db010f9cb1dbfa8ccef26ef183a8ed7a6504be"}
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.927452 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90536afdfda343587b96d14881db010f9cb1dbfa8ccef26ef183a8ed7a6504be"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.927544 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-37a8-account-create-update-2pxzq"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.931023 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-87m2z" event={"ID":"56f1e2e4-7888-40f5-962c-2298aaa75d60","Type":"ContainerStarted","Data":"1a5011d1ce56fb586eae0db1f125d6527f67faabd3172ec43c0043152119152b"}
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.932825 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d77-account-create-update-pf4g2" event={"ID":"d16eba5e-1610-465b-b346-51692b4d7ad0","Type":"ContainerDied","Data":"a2bc2c5276597af02a79b1893b16ae346e02aadb312b446865673678f4fcc6cb"}
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.932837 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d77-account-create-update-pf4g2"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.932855 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2bc2c5276597af02a79b1893b16ae346e02aadb312b446865673678f4fcc6cb"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.935510 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-973c-account-create-update-vlmrw"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.935627 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-973c-account-create-update-vlmrw" event={"ID":"dc638766-f495-40bf-b04e-017d19ca3361","Type":"ContainerDied","Data":"bd2829968ec9542904eb54fc44ba00e13a68550cf5dc44cb206ac3289d086c49"}
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.935651 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd2829968ec9542904eb54fc44ba00e13a68550cf5dc44cb206ac3289d086c49"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.943243 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-csqpn" event={"ID":"e466ed3f-24cb-4a9a-9820-c4f5a31b7982","Type":"ContainerDied","Data":"c68256fbb994bf8571c2c570b7ea6db03e8fc0dc0ae6414cb86684b18620543b"}
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.943288 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c68256fbb994bf8571c2c570b7ea6db03e8fc0dc0ae6414cb86684b18620543b"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.943270 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-csqpn"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.947351 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-6ncxg"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.951309 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-99mrr"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.952469 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-6ncxg" event={"ID":"47a64870-144b-4e50-a338-4a10e39333d2","Type":"ContainerDied","Data":"5b545a41fcaf9c5b2378bdb959f5b6fd264dbf3464c4b8c7700e8a473fd5cf4c"}
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.952632 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-99mrr" event={"ID":"fcaf6f1b-c353-436b-aeb6-23344442588b","Type":"ContainerDied","Data":"1d578203ed50a8b53914544664beda9e52e3fd1841eb5ba59dae8d4a8a78aada"}
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.952662 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d578203ed50a8b53914544664beda9e52e3fd1841eb5ba59dae8d4a8a78aada"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.952799 5049 scope.go:117] "RemoveContainer" containerID="18a31fb37cd3e2dfb9da97f69a2ef54c149621f0405876f3cb7f425f48e3d989"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.956350 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-87m2z" podStartSLOduration=2.5374908229999997 podStartE2EDuration="6.956329338s" podCreationTimestamp="2026-01-27 17:17:13 +0000 UTC" firstStartedPulling="2026-01-27 17:17:14.823310846 +0000 UTC m=+1209.922284395" lastFinishedPulling="2026-01-27 17:17:19.242149361 +0000 UTC m=+1214.341122910" observedRunningTime="2026-01-27 17:17:19.953239259 +0000 UTC m=+1215.052212808" watchObservedRunningTime="2026-01-27 17:17:19.956329338 +0000 UTC m=+1215.055302887"
Jan 27 17:17:19 crc kubenswrapper[5049]: I0127 17:17:19.991515 5049 scope.go:117] "RemoveContainer" containerID="7a0358aa2ff2d627a2df8479fcaa11e94076142f768005f18f7cdddd0ee9389c"
Jan 27 17:17:20 crc kubenswrapper[5049]: I0127 17:17:20.002836 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6ncxg"]
Jan 27 17:17:20 crc kubenswrapper[5049]: I0127 17:17:20.004887 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-6ncxg"]
Jan 27 17:17:21 crc kubenswrapper[5049]: I0127 17:17:21.662062 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a64870-144b-4e50-a338-4a10e39333d2" path="/var/lib/kubelet/pods/47a64870-144b-4e50-a338-4a10e39333d2/volumes"
Jan 27 17:17:22 crc kubenswrapper[5049]: I0127 17:17:22.987142 5049 generic.go:334] "Generic (PLEG): container finished" podID="56f1e2e4-7888-40f5-962c-2298aaa75d60" containerID="1a5011d1ce56fb586eae0db1f125d6527f67faabd3172ec43c0043152119152b" exitCode=0
Jan 27 17:17:22 crc kubenswrapper[5049]: I0127 17:17:22.987278 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-87m2z" event={"ID":"56f1e2e4-7888-40f5-962c-2298aaa75d60","Type":"ContainerDied","Data":"1a5011d1ce56fb586eae0db1f125d6527f67faabd3172ec43c0043152119152b"}
Jan 27 17:17:24 crc kubenswrapper[5049]: I0127 17:17:24.373173 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-87m2z"
Jan 27 17:17:24 crc kubenswrapper[5049]: I0127 17:17:24.475625 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f1e2e4-7888-40f5-962c-2298aaa75d60-combined-ca-bundle\") pod \"56f1e2e4-7888-40f5-962c-2298aaa75d60\" (UID: \"56f1e2e4-7888-40f5-962c-2298aaa75d60\") "
Jan 27 17:17:24 crc kubenswrapper[5049]: I0127 17:17:24.475794 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f1e2e4-7888-40f5-962c-2298aaa75d60-config-data\") pod \"56f1e2e4-7888-40f5-962c-2298aaa75d60\" (UID: \"56f1e2e4-7888-40f5-962c-2298aaa75d60\") "
Jan 27 17:17:24 crc kubenswrapper[5049]: I0127 17:17:24.475905 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcktw\" (UniqueName: \"kubernetes.io/projected/56f1e2e4-7888-40f5-962c-2298aaa75d60-kube-api-access-lcktw\") pod \"56f1e2e4-7888-40f5-962c-2298aaa75d60\" (UID: \"56f1e2e4-7888-40f5-962c-2298aaa75d60\") "
Jan 27 17:17:24 crc kubenswrapper[5049]: I0127 17:17:24.483146 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f1e2e4-7888-40f5-962c-2298aaa75d60-kube-api-access-lcktw" (OuterVolumeSpecName: "kube-api-access-lcktw") pod "56f1e2e4-7888-40f5-962c-2298aaa75d60" (UID: "56f1e2e4-7888-40f5-962c-2298aaa75d60"). InnerVolumeSpecName "kube-api-access-lcktw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:17:24 crc kubenswrapper[5049]: I0127 17:17:24.501398 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f1e2e4-7888-40f5-962c-2298aaa75d60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56f1e2e4-7888-40f5-962c-2298aaa75d60" (UID: "56f1e2e4-7888-40f5-962c-2298aaa75d60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:17:24 crc kubenswrapper[5049]: I0127 17:17:24.542025 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f1e2e4-7888-40f5-962c-2298aaa75d60-config-data" (OuterVolumeSpecName: "config-data") pod "56f1e2e4-7888-40f5-962c-2298aaa75d60" (UID: "56f1e2e4-7888-40f5-962c-2298aaa75d60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:17:24 crc kubenswrapper[5049]: I0127 17:17:24.578335 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f1e2e4-7888-40f5-962c-2298aaa75d60-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:24 crc kubenswrapper[5049]: I0127 17:17:24.578388 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f1e2e4-7888-40f5-962c-2298aaa75d60-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:24 crc kubenswrapper[5049]: I0127 17:17:24.578403 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcktw\" (UniqueName: \"kubernetes.io/projected/56f1e2e4-7888-40f5-962c-2298aaa75d60-kube-api-access-lcktw\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.008119 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-87m2z" event={"ID":"56f1e2e4-7888-40f5-962c-2298aaa75d60","Type":"ContainerDied","Data":"924f7f0dd12748677e0dd2d15f2e5e982be799ba8db6e9a47cd668c243a9d42b"}
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.008152 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="924f7f0dd12748677e0dd2d15f2e5e982be799ba8db6e9a47cd668c243a9d42b"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.008189 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-87m2z"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.305732 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-kp4pj"]
Jan 27 17:17:25 crc kubenswrapper[5049]: E0127 17:17:25.306326 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3" containerName="mariadb-account-create-update"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306341 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3" containerName="mariadb-account-create-update"
Jan 27 17:17:25 crc kubenswrapper[5049]: E0127 17:17:25.306363 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e466ed3f-24cb-4a9a-9820-c4f5a31b7982" containerName="mariadb-database-create"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306372 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e466ed3f-24cb-4a9a-9820-c4f5a31b7982" containerName="mariadb-database-create"
Jan 27 17:17:25 crc kubenswrapper[5049]: E0127 17:17:25.306388 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc638766-f495-40bf-b04e-017d19ca3361" containerName="mariadb-account-create-update"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306396 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc638766-f495-40bf-b04e-017d19ca3361" containerName="mariadb-account-create-update"
Jan 27 17:17:25 crc kubenswrapper[5049]: E0127 17:17:25.306408 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f1e2e4-7888-40f5-962c-2298aaa75d60" containerName="keystone-db-sync"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306414 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f1e2e4-7888-40f5-962c-2298aaa75d60" containerName="keystone-db-sync"
Jan 27 17:17:25 crc kubenswrapper[5049]: E0127 17:17:25.306430 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a64870-144b-4e50-a338-4a10e39333d2" containerName="init"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306436 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a64870-144b-4e50-a338-4a10e39333d2" containerName="init"
Jan 27 17:17:25 crc kubenswrapper[5049]: E0127 17:17:25.306443 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a64870-144b-4e50-a338-4a10e39333d2" containerName="dnsmasq-dns"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306449 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a64870-144b-4e50-a338-4a10e39333d2" containerName="dnsmasq-dns"
Jan 27 17:17:25 crc kubenswrapper[5049]: E0127 17:17:25.306458 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4107b32c-cf40-4fe7-bd5b-00c00ff476f8" containerName="mariadb-database-create"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306465 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4107b32c-cf40-4fe7-bd5b-00c00ff476f8" containerName="mariadb-database-create"
Jan 27 17:17:25 crc kubenswrapper[5049]: E0127 17:17:25.306472 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcaf6f1b-c353-436b-aeb6-23344442588b" containerName="mariadb-database-create"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306478 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcaf6f1b-c353-436b-aeb6-23344442588b" containerName="mariadb-database-create"
Jan 27 17:17:25 crc kubenswrapper[5049]: E0127 17:17:25.306491 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16eba5e-1610-465b-b346-51692b4d7ad0" containerName="mariadb-account-create-update"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306497 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16eba5e-1610-465b-b346-51692b4d7ad0" containerName="mariadb-account-create-update"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306642 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc638766-f495-40bf-b04e-017d19ca3361" containerName="mariadb-account-create-update"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306656 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcaf6f1b-c353-436b-aeb6-23344442588b" containerName="mariadb-database-create"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306672 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="47a64870-144b-4e50-a338-4a10e39333d2" containerName="dnsmasq-dns"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306722 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e466ed3f-24cb-4a9a-9820-c4f5a31b7982" containerName="mariadb-database-create"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306732 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="4107b32c-cf40-4fe7-bd5b-00c00ff476f8" containerName="mariadb-database-create"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306742 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16eba5e-1610-465b-b346-51692b4d7ad0" containerName="mariadb-account-create-update"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306750 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3" containerName="mariadb-account-create-update"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.306760 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f1e2e4-7888-40f5-962c-2298aaa75d60" containerName="keystone-db-sync"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.307570 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.320039 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-kp4pj"]
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.343061 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ntb6x"]
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.344241 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.347747 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.347813 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.347902 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.348134 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.348291 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ztd82"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.370172 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ntb6x"]
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.422374 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-dns-svc\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.422485 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-config\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.422553 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xt6k\" (UniqueName: \"kubernetes.io/projected/125f6635-de8f-4c63-a271-c60f9e828d9c-kube-api-access-5xt6k\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.422596 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.422639 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.422672 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.507409 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.509356 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.511620 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.514287 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.522560 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.523826 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-config\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.523876 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-scripts\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.523920 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-fernet-keys\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.523967 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-credential-keys\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.523996 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xt6k\" (UniqueName: \"kubernetes.io/projected/125f6635-de8f-4c63-a271-c60f9e828d9c-kube-api-access-5xt6k\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.524019 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-config-data\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.524055 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.524083 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr9tv\" (UniqueName: \"kubernetes.io/projected/96e03d90-0aee-446d-bffa-836081fd1aaf-kube-api-access-mr9tv\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.524126 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.524154 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.524178 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-combined-ca-bundle\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.524234 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-dns-svc\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.524555 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-config\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.524958 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-dns-svc\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.525099 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.525357 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.525621 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.542366 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xt6k\" (UniqueName: \"kubernetes.io/projected/125f6635-de8f-4c63-a271-c60f9e828d9c-kube-api-access-5xt6k\") pod \"dnsmasq-dns-847c4cc679-kp4pj\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") " pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.581232 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-4ftjm"]
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.582448 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4ftjm"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.584200 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.595130 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-99bdp"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.595824 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4ftjm"]
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.606948 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9wgjf"]
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.607915 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9wgjf"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.624262 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4gthl"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.624275 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.624593 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.625143 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.625178 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.625215 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-scripts\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.625250 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-fernet-keys\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.625277 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-credential-keys\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.625295 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-config-data\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.625310 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a08784-0e34-4e50-8cca-4f2845e7a11e-log-httpd\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.625329 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a08784-0e34-4e50-8cca-4f2845e7a11e-run-httpd\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.625346 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-config-data\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.625365 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr9tv\" (UniqueName: \"kubernetes.io/projected/96e03d90-0aee-446d-bffa-836081fd1aaf-kube-api-access-mr9tv\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.625399 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-combined-ca-bundle\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.625421 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-scripts\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.625452 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h75z\" (UniqueName: \"kubernetes.io/projected/e7a08784-0e34-4e50-8cca-4f2845e7a11e-kube-api-access-9h75z\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.630646 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.637045 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-credential-keys\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.645573 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-scripts\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.649220 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-fernet-keys\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.661422 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-config-data\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.674096 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr9tv\" (UniqueName: \"kubernetes.io/projected/96e03d90-0aee-446d-bffa-836081fd1aaf-kube-api-access-mr9tv\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.701682 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-combined-ca-bundle\") pod \"keystone-bootstrap-ntb6x\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") " pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.701998 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9wgjf"]
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.738879 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a08784-0e34-4e50-8cca-4f2845e7a11e-log-httpd\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.738926 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-config\") pod \"neutron-db-sync-9wgjf\" (UID: \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\") " pod="openstack/neutron-db-sync-9wgjf"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.738951 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a08784-0e34-4e50-8cca-4f2845e7a11e-run-httpd\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.738973 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-config-data\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.739005 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-combined-ca-bundle\") pod \"neutron-db-sync-9wgjf\" (UID: \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\") " pod="openstack/neutron-db-sync-9wgjf"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.739024 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2t2x\" (UniqueName: \"kubernetes.io/projected/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-kube-api-access-f2t2x\") pod \"neutron-db-sync-9wgjf\" (UID: \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\") " pod="openstack/neutron-db-sync-9wgjf"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.739071 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-scripts\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.739112 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h75z\" (UniqueName: \"kubernetes.io/projected/e7a08784-0e34-4e50-8cca-4f2845e7a11e-kube-api-access-9h75z\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.739145 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-db-sync-config-data\") pod \"barbican-db-sync-4ftjm\" (UID: \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\") " pod="openstack/barbican-db-sync-4ftjm"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.739167 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvxrj\" (UniqueName: \"kubernetes.io/projected/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-kube-api-access-kvxrj\") pod \"barbican-db-sync-4ftjm\" (UID: \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\") " pod="openstack/barbican-db-sync-4ftjm"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.739192 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.739222 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-combined-ca-bundle\") pod \"barbican-db-sync-4ftjm\" (UID: \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\") " pod="openstack/barbican-db-sync-4ftjm"
Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.739242 5049 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.745723 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a08784-0e34-4e50-8cca-4f2845e7a11e-run-httpd\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.753167 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.753500 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a08784-0e34-4e50-8cca-4f2845e7a11e-log-httpd\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.754365 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-config-data\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.758712 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.759083 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-scripts\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.783865 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h75z\" (UniqueName: \"kubernetes.io/projected/e7a08784-0e34-4e50-8cca-4f2845e7a11e-kube-api-access-9h75z\") pod \"ceilometer-0\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " pod="openstack/ceilometer-0" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.788700 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-dqp4j"] Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.793113 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.803098 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.803299 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4klb7" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.803312 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.823329 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.826900 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dqp4j"] Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.843555 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hl7hg"] Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.844726 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.844766 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-config\") pod \"neutron-db-sync-9wgjf\" (UID: \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\") " pod="openstack/neutron-db-sync-9wgjf" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.844847 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-combined-ca-bundle\") pod \"neutron-db-sync-9wgjf\" (UID: \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\") " pod="openstack/neutron-db-sync-9wgjf" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.844884 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2t2x\" (UniqueName: \"kubernetes.io/projected/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-kube-api-access-f2t2x\") pod \"neutron-db-sync-9wgjf\" (UID: \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\") " pod="openstack/neutron-db-sync-9wgjf" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.845028 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-db-sync-config-data\") pod \"barbican-db-sync-4ftjm\" (UID: \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\") " pod="openstack/barbican-db-sync-4ftjm" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.845049 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvxrj\" (UniqueName: \"kubernetes.io/projected/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-kube-api-access-kvxrj\") pod \"barbican-db-sync-4ftjm\" (UID: \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\") " pod="openstack/barbican-db-sync-4ftjm" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.845100 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-combined-ca-bundle\") pod \"barbican-db-sync-4ftjm\" (UID: \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\") " pod="openstack/barbican-db-sync-4ftjm" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 
17:17:25.850480 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-db-sync-config-data\") pod \"barbican-db-sync-4ftjm\" (UID: \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\") " pod="openstack/barbican-db-sync-4ftjm" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.850629 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.850949 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-l9ptw" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.851215 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.854417 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-config\") pod \"neutron-db-sync-9wgjf\" (UID: \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\") " pod="openstack/neutron-db-sync-9wgjf" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.856031 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-kp4pj"] Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.856989 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-combined-ca-bundle\") pod \"neutron-db-sync-9wgjf\" (UID: \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\") " pod="openstack/neutron-db-sync-9wgjf" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.857092 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-combined-ca-bundle\") pod \"barbican-db-sync-4ftjm\" (UID: \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\") " pod="openstack/barbican-db-sync-4ftjm" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.867493 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hl7hg"] Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.871643 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2t2x\" (UniqueName: \"kubernetes.io/projected/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-kube-api-access-f2t2x\") pod \"neutron-db-sync-9wgjf\" (UID: \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\") " pod="openstack/neutron-db-sync-9wgjf" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.875066 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvxrj\" (UniqueName: \"kubernetes.io/projected/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-kube-api-access-kvxrj\") pod \"barbican-db-sync-4ftjm\" (UID: \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\") " pod="openstack/barbican-db-sync-4ftjm" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.877339 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-shbs4"] Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.878921 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.889885 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-shbs4"] Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.913157 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4ftjm" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.946264 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/032e489f-aab0-40a8-b7ce-99febca8d8be-logs\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.946319 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bhhz\" (UniqueName: \"kubernetes.io/projected/bd627f49-e48d-4f81-a41c-3c753fdb27b3-kube-api-access-9bhhz\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.946488 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-config-data\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.946527 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqbc2\" (UniqueName: \"kubernetes.io/projected/032e489f-aab0-40a8-b7ce-99febca8d8be-kube-api-access-tqbc2\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.946617 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-db-sync-config-data\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.946672 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-scripts\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.946725 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-combined-ca-bundle\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.946742 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-config-data\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 
27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.946838 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd627f49-e48d-4f81-a41c-3c753fdb27b3-etc-machine-id\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.946893 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-scripts\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.946983 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-combined-ca-bundle\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.955816 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9wgjf" Jan 27 17:17:25 crc kubenswrapper[5049]: I0127 17:17:25.971348 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ntb6x" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.053512 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxsgb\" (UniqueName: \"kubernetes.io/projected/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-kube-api-access-qxsgb\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.053566 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/032e489f-aab0-40a8-b7ce-99febca8d8be-logs\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.053601 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bhhz\" (UniqueName: \"kubernetes.io/projected/bd627f49-e48d-4f81-a41c-3c753fdb27b3-kube-api-access-9bhhz\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.053852 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-config-data\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.053897 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqbc2\" (UniqueName: \"kubernetes.io/projected/032e489f-aab0-40a8-b7ce-99febca8d8be-kube-api-access-tqbc2\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.053999 5049 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-db-sync-config-data\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.054024 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/032e489f-aab0-40a8-b7ce-99febca8d8be-logs\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.054044 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-scripts\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.054081 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.054109 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-combined-ca-bundle\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.054131 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-config-data\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.054209 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.054245 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd627f49-e48d-4f81-a41c-3c753fdb27b3-etc-machine-id\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.054279 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-config\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.054309 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.054335 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-scripts\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.054379 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.054448 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-combined-ca-bundle\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.054475 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd627f49-e48d-4f81-a41c-3c753fdb27b3-etc-machine-id\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.059326 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-scripts\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.059567 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-config-data\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.061110 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-combined-ca-bundle\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.061371 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-config-data\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.063347 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-combined-ca-bundle\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 
17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.068731 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-scripts\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.072606 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-db-sync-config-data\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.074547 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bhhz\" (UniqueName: \"kubernetes.io/projected/bd627f49-e48d-4f81-a41c-3c753fdb27b3-kube-api-access-9bhhz\") pod \"cinder-db-sync-dqp4j\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.078855 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqbc2\" (UniqueName: \"kubernetes.io/projected/032e489f-aab0-40a8-b7ce-99febca8d8be-kube-api-access-tqbc2\") pod \"placement-db-sync-hl7hg\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.120070 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.155722 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.155869 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-config\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.155898 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.155925 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.155968 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxsgb\" (UniqueName: \"kubernetes.io/projected/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-kube-api-access-qxsgb\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.156039 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.157222 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.157223 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-config\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.157801 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.158862 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.160184 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.180050 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.186472 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxsgb\" (UniqueName: \"kubernetes.io/projected/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-kube-api-access-qxsgb\") pod \"dnsmasq-dns-785d8bcb8c-shbs4\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.194045 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" Jan 27 17:17:26 crc kubenswrapper[5049]: I0127 17:17:26.269078 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-kp4pj"] Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.336172 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.418155 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.419377 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.428442 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.428610 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.429166 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.429285 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vv9mx" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.436130 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.506493 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.508042 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.513173 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.513329 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.529287 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.543491 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-4ftjm"] Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.565979 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.566090 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.566158 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-logs\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.566209 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.566433 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.566476 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.566526 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbvwl\" (UniqueName: \"kubernetes.io/projected/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-kube-api-access-cbvwl\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " 
pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.566583 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.625182 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ntb6x"] Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.633988 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9wgjf"] Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.668781 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a83e43a6-724d-4351-a0ec-4f7dc48850d1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.669156 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.669190 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.669212 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a83e43a6-724d-4351-a0ec-4f7dc48850d1-logs\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.669346 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.669442 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-logs\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.669491 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 
17:17:26.669522 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfwzc\" (UniqueName: \"kubernetes.io/projected/a83e43a6-724d-4351-a0ec-4f7dc48850d1-kube-api-access-rfwzc\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.669617 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.669702 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.669775 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.669858 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.669903 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.669946 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbvwl\" (UniqueName: \"kubernetes.io/projected/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-kube-api-access-cbvwl\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.669979 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.670005 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-logs\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.670011 5049 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.670302 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.670493 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.673610 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.674807 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-config-data\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.676592 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.678238 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-scripts\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.687389 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbvwl\" (UniqueName: \"kubernetes.io/projected/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-kube-api-access-cbvwl\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.708809 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " pod="openstack/glance-default-external-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.741634 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.774387 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.774472 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a83e43a6-724d-4351-a0ec-4f7dc48850d1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.774650 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.774718 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.774775 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a83e43a6-724d-4351-a0ec-4f7dc48850d1-logs\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.774943 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfwzc\" (UniqueName: \"kubernetes.io/projected/a83e43a6-724d-4351-a0ec-4f7dc48850d1-kube-api-access-rfwzc\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.775083 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.775179 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.775248 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.776282 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a83e43a6-724d-4351-a0ec-4f7dc48850d1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.776946 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a83e43a6-724d-4351-a0ec-4f7dc48850d1-logs\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.779519 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.781321 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.786247 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.788637 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.800146 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfwzc\" (UniqueName: \"kubernetes.io/projected/a83e43a6-724d-4351-a0ec-4f7dc48850d1-kube-api-access-rfwzc\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.811975 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:26.837553 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
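The mount sequence above repeats a fixed per-volume pattern: operationExecutor.VerifyControllerAttachedVolume, then MountVolume.MountDevice for volumes with a device mount path (the local PVs here), then MountVolume.SetUp. A minimal sketch of one way to check such a journal for volumes that started mounting but never reached SetUp; the regexes and the one-entry-per-line input format are assumptions about this log, not a fixed interface:

    import re
    import sys
    from collections import defaultdict

    # klog quotes its structured fields, so the escaped quotes (\") are
    # matched literally as they appear in the journal text.
    STARTED = re.compile(r'MountVolume started for volume \\"([^\\"]+)\\".*pod="([^"]+)"')
    SUCCEEDED = re.compile(r'MountVolume\.SetUp succeeded for volume \\"([^\\"]+)\\".*pod="([^"]+)"')

    def pending_mounts(lines):
        """Return {pod: volumes} that started mounting but never reached SetUp."""
        pending = defaultdict(set)
        for line in lines:
            if (m := STARTED.search(line)):
                pending[m.group(2)].add(m.group(1))
            if (m := SUCCEEDED.search(line)):
                pending[m.group(2)].discard(m.group(1))
        return {pod: vols for pod, vols in pending.items() if vols}

    if __name__ == "__main__":
        # Hypothetical usage: journalctl -u kubelet --no-pager | python3 pending_mounts.py
        for pod, vols in sorted(pending_mounts(sys.stdin).items()):
            print(pod, sorted(vols))

For the two glance pods above, every "MountVolume started" entry is answered by a "SetUp succeeded" entry, so this sketch would print nothing for this window.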
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.037260 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9wgjf" event={"ID":"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7","Type":"ContainerStarted","Data":"a03fcd74978f09ca045dfe9c61b9f24cf5e346044f862b8fae04cbbf4b1c4ef2"}
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.037493 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9wgjf" event={"ID":"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7","Type":"ContainerStarted","Data":"50317188dc38bed51f45519d79fc487927684e4caed29375c4af8f21b487709b"}
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.039433 5049 generic.go:334] "Generic (PLEG): container finished" podID="125f6635-de8f-4c63-a271-c60f9e828d9c" containerID="aacfa4659d34b688a12738060634dc38514e00db9de93d2e92849505a246a250" exitCode=0
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.039603 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-kp4pj" event={"ID":"125f6635-de8f-4c63-a271-c60f9e828d9c","Type":"ContainerDied","Data":"aacfa4659d34b688a12738060634dc38514e00db9de93d2e92849505a246a250"}
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.039623 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-kp4pj" event={"ID":"125f6635-de8f-4c63-a271-c60f9e828d9c","Type":"ContainerStarted","Data":"38a7de02c055d9b880b74fc1da332a48f3831eccbec78b6c31100e29bad7266b"}
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.041810 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4ftjm" event={"ID":"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c","Type":"ContainerStarted","Data":"a7ccc5ef4c71b4b1943123f458233e4aebc59fb95b8a05841bfbb9a4db68c2bb"}
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.043159 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ntb6x" event={"ID":"96e03d90-0aee-446d-bffa-836081fd1aaf","Type":"ContainerStarted","Data":"f9bed1843a91672f90172fb39a72ff66021d8df792a9d70084e5a6ea6b70cdc1"}
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.043182 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ntb6x" event={"ID":"96e03d90-0aee-446d-bffa-836081fd1aaf","Type":"ContainerStarted","Data":"d250e49b447295ec27465c9ab25842c058c28c5c382f0a2b5073115fb4fee3e8"}
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.048851 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a08784-0e34-4e50-8cca-4f2845e7a11e","Type":"ContainerStarted","Data":"7fc49c991d1d3ff8bcd31d5130e69fcf2081deb9fe64fd8dd798f0270e7bc850"}
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.082391 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ntb6x" podStartSLOduration=2.082369366 podStartE2EDuration="2.082369366s" podCreationTimestamp="2026-01-27 17:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:17:27.073850521 +0000 UTC m=+1222.172824080" watchObservedRunningTime="2026-01-27 17:17:27.082369366 +0000 UTC m=+1222.181342915"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.083972 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9wgjf" podStartSLOduration=2.083960912 podStartE2EDuration="2.083960912s" podCreationTimestamp="2026-01-27 17:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:17:27.057087467 +0000 UTC m=+1222.156061046" watchObservedRunningTime="2026-01-27 17:17:27.083960912 +0000 UTC m=+1222.182934471"
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.405134 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dqp4j"]
Jan 27 17:17:27 crc kubenswrapper[5049]: W0127 17:17:27.414116 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd627f49_e48d_4f81_a41c_3c753fdb27b3.slice/crio-86ce77af3fb279939f43f7c7da15fbf300fc74b87c8bc92557c9ba62e88d0d0b WatchSource:0}: Error finding container 86ce77af3fb279939f43f7c7da15fbf300fc74b87c8bc92557c9ba62e88d0d0b: Status 404 returned error can't find the container with id 86ce77af3fb279939f43f7c7da15fbf300fc74b87c8bc92557c9ba62e88d0d0b
Jan 27 17:17:27 crc kubenswrapper[5049]: W0127 17:17:27.428588 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c4a9cea_a9c1_42e9_91a6_50302c39ac9e.slice/crio-8afdf0ab67810d5a6ff5be6717715290963495436b86d82176e0bd01f50e005a WatchSource:0}: Error finding container 8afdf0ab67810d5a6ff5be6717715290963495436b86d82176e0bd01f50e005a: Status 404 returned error can't find the container with id 8afdf0ab67810d5a6ff5be6717715290963495436b86d82176e0bd01f50e005a
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.429764 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-shbs4"]
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.455173 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hl7hg"]
Jan 27 17:17:27 crc kubenswrapper[5049]: W0127 17:17:27.457065 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod032e489f_aab0_40a8_b7ce_99febca8d8be.slice/crio-20a00e21057e3a6ce9e69f699ebb1c5cf030f3ef38b5f8477fc6193d90cc0f73 WatchSource:0}: Error finding container 20a00e21057e3a6ce9e69f699ebb1c5cf030f3ef38b5f8477fc6193d90cc0f73: Status 404 returned error can't find the container with id 20a00e21057e3a6ce9e69f699ebb1c5cf030f3ef38b5f8477fc6193d90cc0f73
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.693992 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
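The pod_startup_latency_tracker entries carry both podStartSLOduration and podStartE2EDuration, and the zero-valued firstStartedPulling/lastFinishedPulling timestamps (Go's zero time, 0001-01-01) indicate that no image pull was recorded for these pods, which is consistent with the two durations matching. A small extractor for those fields, again assuming one journal entry per line:

    import re
    import sys

    # Field layout as seen in the "Observed pod startup duration" entries above.
    LATENCY = re.compile(
        r'Observed pod startup duration" pod="([^"]+)"'
        r'.*?podStartSLOduration=([0-9.]+)'
        r'.*?podStartE2EDuration="([0-9.]+)s"'
    )

    for line in sys.stdin:
        if (m := LATENCY.search(line)):
            pod, slo, e2e = m.groups()
            print(f"{pod}\tSLO={float(slo):.3f}s\te2e={float(e2e):.3f}s")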
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.713726 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 17:17:27 crc kubenswrapper[5049]: W0127 17:17:27.732998 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda83e43a6_724d_4351_a0ec_4f7dc48850d1.slice/crio-d1aeccabb6ba553811899df8ecc7c7f029f1976869718921c8010390eacda0df WatchSource:0}: Error finding container d1aeccabb6ba553811899df8ecc7c7f029f1976869718921c8010390eacda0df: Status 404 returned error can't find the container with id d1aeccabb6ba553811899df8ecc7c7f029f1976869718921c8010390eacda0df
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.802077 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-ovsdbserver-sb\") pod \"125f6635-de8f-4c63-a271-c60f9e828d9c\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") "
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.802126 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-config\") pod \"125f6635-de8f-4c63-a271-c60f9e828d9c\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") "
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.802166 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-dns-swift-storage-0\") pod \"125f6635-de8f-4c63-a271-c60f9e828d9c\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") "
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.802227 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-dns-svc\") pod \"125f6635-de8f-4c63-a271-c60f9e828d9c\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") "
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.802352 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-ovsdbserver-nb\") pod \"125f6635-de8f-4c63-a271-c60f9e828d9c\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") "
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.802397 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xt6k\" (UniqueName: \"kubernetes.io/projected/125f6635-de8f-4c63-a271-c60f9e828d9c-kube-api-access-5xt6k\") pod \"125f6635-de8f-4c63-a271-c60f9e828d9c\" (UID: \"125f6635-de8f-4c63-a271-c60f9e828d9c\") "
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.811841 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/125f6635-de8f-4c63-a271-c60f9e828d9c-kube-api-access-5xt6k" (OuterVolumeSpecName: "kube-api-access-5xt6k") pod "125f6635-de8f-4c63-a271-c60f9e828d9c" (UID: "125f6635-de8f-4c63-a271-c60f9e828d9c"). InnerVolumeSpecName "kube-api-access-5xt6k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.853085 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "125f6635-de8f-4c63-a271-c60f9e828d9c" (UID: "125f6635-de8f-4c63-a271-c60f9e828d9c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.857381 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "125f6635-de8f-4c63-a271-c60f9e828d9c" (UID: "125f6635-de8f-4c63-a271-c60f9e828d9c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.860918 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "125f6635-de8f-4c63-a271-c60f9e828d9c" (UID: "125f6635-de8f-4c63-a271-c60f9e828d9c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.868919 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "125f6635-de8f-4c63-a271-c60f9e828d9c" (UID: "125f6635-de8f-4c63-a271-c60f9e828d9c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.880861 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-config" (OuterVolumeSpecName: "config") pod "125f6635-de8f-4c63-a271-c60f9e828d9c" (UID: "125f6635-de8f-4c63-a271-c60f9e828d9c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.904873 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xt6k\" (UniqueName: \"kubernetes.io/projected/125f6635-de8f-4c63-a271-c60f9e828d9c-kube-api-access-5xt6k\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.904906 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.904915 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-config\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.904923 5049 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.904932 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:27 crc kubenswrapper[5049]: I0127 17:17:27.904939 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/125f6635-de8f-4c63-a271-c60f9e828d9c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.058854 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-kp4pj" event={"ID":"125f6635-de8f-4c63-a271-c60f9e828d9c","Type":"ContainerDied","Data":"38a7de02c055d9b880b74fc1da332a48f3831eccbec78b6c31100e29bad7266b"}
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.058940 5049 scope.go:117] "RemoveContainer" containerID="aacfa4659d34b688a12738060634dc38514e00db9de93d2e92849505a246a250"
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.059070 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-kp4pj"
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.069357 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dqp4j" event={"ID":"bd627f49-e48d-4f81-a41c-3c753fdb27b3","Type":"ContainerStarted","Data":"86ce77af3fb279939f43f7c7da15fbf300fc74b87c8bc92557c9ba62e88d0d0b"}
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.070958 5049 generic.go:334] "Generic (PLEG): container finished" podID="3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" containerID="c365c84b097b4fd100afd8736b2bccf2fd5ac7778d42563e702e2382ffd563b9" exitCode=0
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.070997 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" event={"ID":"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e","Type":"ContainerDied","Data":"c365c84b097b4fd100afd8736b2bccf2fd5ac7778d42563e702e2382ffd563b9"}
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.071011 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" event={"ID":"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e","Type":"ContainerStarted","Data":"8afdf0ab67810d5a6ff5be6717715290963495436b86d82176e0bd01f50e005a"}
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.073832 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a83e43a6-724d-4351-a0ec-4f7dc48850d1","Type":"ContainerStarted","Data":"d1aeccabb6ba553811899df8ecc7c7f029f1976869718921c8010390eacda0df"}
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.102590 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hl7hg" event={"ID":"032e489f-aab0-40a8-b7ce-99febca8d8be","Type":"ContainerStarted","Data":"20a00e21057e3a6ce9e69f699ebb1c5cf030f3ef38b5f8477fc6193d90cc0f73"}
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.179748 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-kp4pj"]
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.190793 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-kp4pj"]
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.209531 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.283285 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.374293 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 17:17:28 crc kubenswrapper[5049]: I0127 17:17:28.487716 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 27 17:17:28 crc kubenswrapper[5049]: W0127 17:17:28.494533 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad98ce6f_0ee2_4d46_9d6a_935617491f5d.slice/crio-abc1a647b89ad2705f9f12f48fd8256add0a4e27ab0b580688f0ac6b677d4b7c WatchSource:0}: Error finding container abc1a647b89ad2705f9f12f48fd8256add0a4e27ab0b580688f0ac6b677d4b7c: Status 404 returned error can't find the container with id abc1a647b89ad2705f9f12f48fd8256add0a4e27ab0b580688f0ac6b677d4b7c
Jan 27 17:17:29 crc kubenswrapper[5049]: I0127 17:17:29.145583 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" event={"ID":"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e","Type":"ContainerStarted","Data":"f74f8d86b5d917c3b3e9b8c0946231f3afed8fd1c67e75447ee4e5e4ddfec1fa"}
Jan 27 17:17:29 crc kubenswrapper[5049]: I0127 17:17:29.145887 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4"
Jan 27 17:17:29 crc kubenswrapper[5049]: I0127 17:17:29.154695 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ad98ce6f-0ee2-4d46-9d6a-935617491f5d","Type":"ContainerStarted","Data":"abc1a647b89ad2705f9f12f48fd8256add0a4e27ab0b580688f0ac6b677d4b7c"}
Jan 27 17:17:29 crc kubenswrapper[5049]: I0127 17:17:29.158564 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a83e43a6-724d-4351-a0ec-4f7dc48850d1","Type":"ContainerStarted","Data":"5ad62f93fd5f08ecc541af184230b9153d343f1f885fa44aa69216e0b309c970"}
Jan 27 17:17:29 crc kubenswrapper[5049]: I0127 17:17:29.659602 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="125f6635-de8f-4c63-a271-c60f9e828d9c" path="/var/lib/kubelet/pods/125f6635-de8f-4c63-a271-c60f9e828d9c/volumes"
Jan 27 17:17:30 crc kubenswrapper[5049]: I0127 17:17:30.182580 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ad98ce6f-0ee2-4d46-9d6a-935617491f5d","Type":"ContainerStarted","Data":"9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad"}
Jan 27 17:17:30 crc kubenswrapper[5049]: I0127 17:17:30.193479 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a83e43a6-724d-4351-a0ec-4f7dc48850d1","Type":"ContainerStarted","Data":"d22cef3aa87549b8b0de8029d65692f7c3a0d23713cfa1dca4bd1090ae205316"}
Jan 27 17:17:30 crc kubenswrapper[5049]: I0127 17:17:30.193844 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a83e43a6-724d-4351-a0ec-4f7dc48850d1" containerName="glance-log" containerID="cri-o://5ad62f93fd5f08ecc541af184230b9153d343f1f885fa44aa69216e0b309c970" gracePeriod=30
Jan 27 17:17:30 crc kubenswrapper[5049]: I0127 17:17:30.193873 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a83e43a6-724d-4351-a0ec-4f7dc48850d1" containerName="glance-httpd" containerID="cri-o://d22cef3aa87549b8b0de8029d65692f7c3a0d23713cfa1dca4bd1090ae205316" gracePeriod=30
Jan 27 17:17:30 crc kubenswrapper[5049]: I0127 17:17:30.215601 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" podStartSLOduration=5.215580605 podStartE2EDuration="5.215580605s" podCreationTimestamp="2026-01-27 17:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:17:29.17482257 +0000 UTC m=+1224.273796139" watchObservedRunningTime="2026-01-27 17:17:30.215580605 +0000 UTC m=+1225.314554164"
Jan 27 17:17:30 crc kubenswrapper[5049]: I0127 17:17:30.218860 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.2188488490000005 podStartE2EDuration="5.218848849s" podCreationTimestamp="2026-01-27 17:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:17:30.214329589 +0000 UTC m=+1225.313303148" watchObservedRunningTime="2026-01-27 17:17:30.218848849 +0000 UTC m=+1225.317822398"
Jan 27 17:17:31 crc kubenswrapper[5049]: I0127 17:17:31.220202 5049 generic.go:334] "Generic (PLEG): container finished" podID="a83e43a6-724d-4351-a0ec-4f7dc48850d1" containerID="d22cef3aa87549b8b0de8029d65692f7c3a0d23713cfa1dca4bd1090ae205316" exitCode=0
Jan 27 17:17:31 crc kubenswrapper[5049]: I0127 17:17:31.220508 5049 generic.go:334] "Generic (PLEG): container finished" podID="a83e43a6-724d-4351-a0ec-4f7dc48850d1" containerID="5ad62f93fd5f08ecc541af184230b9153d343f1f885fa44aa69216e0b309c970" exitCode=143
Jan 27 17:17:31 crc kubenswrapper[5049]: I0127 17:17:31.220277 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a83e43a6-724d-4351-a0ec-4f7dc48850d1","Type":"ContainerDied","Data":"d22cef3aa87549b8b0de8029d65692f7c3a0d23713cfa1dca4bd1090ae205316"}
Jan 27 17:17:31 crc kubenswrapper[5049]: I0127 17:17:31.220561 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a83e43a6-724d-4351-a0ec-4f7dc48850d1","Type":"ContainerDied","Data":"5ad62f93fd5f08ecc541af184230b9153d343f1f885fa44aa69216e0b309c970"}
Jan 27 17:17:32 crc kubenswrapper[5049]: I0127 17:17:32.230088 5049 generic.go:334] "Generic (PLEG): container finished" podID="96e03d90-0aee-446d-bffa-836081fd1aaf" containerID="f9bed1843a91672f90172fb39a72ff66021d8df792a9d70084e5a6ea6b70cdc1" exitCode=0
Jan 27 17:17:32 crc kubenswrapper[5049]: I0127 17:17:32.230127 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ntb6x" event={"ID":"96e03d90-0aee-446d-bffa-836081fd1aaf","Type":"ContainerDied","Data":"f9bed1843a91672f90172fb39a72ff66021d8df792a9d70084e5a6ea6b70cdc1"}
Jan 27 17:17:36 crc kubenswrapper[5049]: I0127 17:17:36.195841 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4"
Jan 27 17:17:36 crc kubenswrapper[5049]: I0127 17:17:36.264303 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wrndw"]
Jan 27 17:17:36 crc kubenswrapper[5049]: I0127 17:17:36.264540 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" podUID="90cf84a1-03e1-46a5-96a1-8cddf4312669" containerName="dnsmasq-dns" containerID="cri-o://843c7623ef79919c967dd028bf2b8986ba3d72e3eb1574bd51bbde06bbbf7480" gracePeriod=10
Jan 27 17:17:37 crc kubenswrapper[5049]: I0127 17:17:37.307463 5049 generic.go:334] "Generic (PLEG): container finished" podID="90cf84a1-03e1-46a5-96a1-8cddf4312669" containerID="843c7623ef79919c967dd028bf2b8986ba3d72e3eb1574bd51bbde06bbbf7480" exitCode=0
Jan 27 17:17:37 crc kubenswrapper[5049]: I0127 17:17:37.307530 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" event={"ID":"90cf84a1-03e1-46a5-96a1-8cddf4312669","Type":"ContainerDied","Data":"843c7623ef79919c967dd028bf2b8986ba3d72e3eb1574bd51bbde06bbbf7480"}
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.478073 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" podUID="90cf84a1-03e1-46a5-96a1-8cddf4312669" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused"
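Of the exit codes above, 0 is a clean stop and 143 is 128 + SIGTERM, i.e. the glance-log container ended on the signal delivered by the grace-period kill rather than through its own exit path. A filter for finished containers with other codes; the regex mirrors the generic.go:334 entries, and treating 0 and 143 as expected is an assumption that fits this shutdown sequence:

    import re
    import sys

    FINISHED = re.compile(
        r'container finished" podID="([^"]+)" containerID="([0-9a-f]+)" exitCode=(-?\d+)'
    )

    EXPECTED = {"0", "143"}  # clean exit, or 128+SIGTERM from a graceful kill

    for line in sys.stdin:
        if (m := FINISHED.search(line)) and m.group(3) not in EXPECTED:
            print(f"pod={m.group(1)} container={m.group(2)[:13]} exitCode={m.group(3)}")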
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.804908 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.928839 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-internal-tls-certs\") pod \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") "
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.928882 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfwzc\" (UniqueName: \"kubernetes.io/projected/a83e43a6-724d-4351-a0ec-4f7dc48850d1-kube-api-access-rfwzc\") pod \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") "
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.928957 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a83e43a6-724d-4351-a0ec-4f7dc48850d1-logs\") pod \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") "
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.928976 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-scripts\") pod \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") "
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.929049 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-combined-ca-bundle\") pod \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") "
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.929064 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") "
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.929112 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a83e43a6-724d-4351-a0ec-4f7dc48850d1-httpd-run\") pod \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") "
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.929281 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-config-data\") pod \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\" (UID: \"a83e43a6-724d-4351-a0ec-4f7dc48850d1\") "
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.929407 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a83e43a6-724d-4351-a0ec-4f7dc48850d1-logs" (OuterVolumeSpecName: "logs") pod "a83e43a6-724d-4351-a0ec-4f7dc48850d1" (UID: "a83e43a6-724d-4351-a0ec-4f7dc48850d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.929653 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a83e43a6-724d-4351-a0ec-4f7dc48850d1-logs\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.930243 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a83e43a6-724d-4351-a0ec-4f7dc48850d1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a83e43a6-724d-4351-a0ec-4f7dc48850d1" (UID: "a83e43a6-724d-4351-a0ec-4f7dc48850d1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.935762 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83e43a6-724d-4351-a0ec-4f7dc48850d1-kube-api-access-rfwzc" (OuterVolumeSpecName: "kube-api-access-rfwzc") pod "a83e43a6-724d-4351-a0ec-4f7dc48850d1" (UID: "a83e43a6-724d-4351-a0ec-4f7dc48850d1"). InnerVolumeSpecName "kube-api-access-rfwzc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.936473 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-scripts" (OuterVolumeSpecName: "scripts") pod "a83e43a6-724d-4351-a0ec-4f7dc48850d1" (UID: "a83e43a6-724d-4351-a0ec-4f7dc48850d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.939612 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "a83e43a6-724d-4351-a0ec-4f7dc48850d1" (UID: "a83e43a6-724d-4351-a0ec-4f7dc48850d1"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.972826 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a83e43a6-724d-4351-a0ec-4f7dc48850d1" (UID: "a83e43a6-724d-4351-a0ec-4f7dc48850d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.980362 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-config-data" (OuterVolumeSpecName: "config-data") pod "a83e43a6-724d-4351-a0ec-4f7dc48850d1" (UID: "a83e43a6-724d-4351-a0ec-4f7dc48850d1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:17:38 crc kubenswrapper[5049]: I0127 17:17:38.995712 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a83e43a6-724d-4351-a0ec-4f7dc48850d1" (UID: "a83e43a6-724d-4351-a0ec-4f7dc48850d1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.031009 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.031065 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.031078 5049 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a83e43a6-724d-4351-a0ec-4f7dc48850d1-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.031090 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.031099 5049 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.031108 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfwzc\" (UniqueName: \"kubernetes.io/projected/a83e43a6-724d-4351-a0ec-4f7dc48850d1-kube-api-access-rfwzc\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.031117 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a83e43a6-724d-4351-a0ec-4f7dc48850d1-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.047982 5049 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.132445 5049 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.330798 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a83e43a6-724d-4351-a0ec-4f7dc48850d1","Type":"ContainerDied","Data":"d1aeccabb6ba553811899df8ecc7c7f029f1976869718921c8010390eacda0df"}
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.330833 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.330870 5049 scope.go:117] "RemoveContainer" containerID="d22cef3aa87549b8b0de8029d65692f7c3a0d23713cfa1dca4bd1090ae205316"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.366622 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.375987 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.387422 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 17:17:39 crc kubenswrapper[5049]: E0127 17:17:39.387820 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125f6635-de8f-4c63-a271-c60f9e828d9c" containerName="init"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.387835 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="125f6635-de8f-4c63-a271-c60f9e828d9c" containerName="init"
Jan 27 17:17:39 crc kubenswrapper[5049]: E0127 17:17:39.387862 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83e43a6-724d-4351-a0ec-4f7dc48850d1" containerName="glance-httpd"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.387868 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83e43a6-724d-4351-a0ec-4f7dc48850d1" containerName="glance-httpd"
Jan 27 17:17:39 crc kubenswrapper[5049]: E0127 17:17:39.387882 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83e43a6-724d-4351-a0ec-4f7dc48850d1" containerName="glance-log"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.387891 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83e43a6-724d-4351-a0ec-4f7dc48850d1" containerName="glance-log"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.388086 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="125f6635-de8f-4c63-a271-c60f9e828d9c" containerName="init"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.388102 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83e43a6-724d-4351-a0ec-4f7dc48850d1" containerName="glance-httpd"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.388113 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83e43a6-724d-4351-a0ec-4f7dc48850d1" containerName="glance-log"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.389130 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.393215 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.393244 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.402802 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.436158 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.436254 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.436432 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.436626 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67xgs\" (UniqueName: \"kubernetes.io/projected/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-kube-api-access-67xgs\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.436718 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.436750 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.436770 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-logs\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.436841 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.538749 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.540011 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.540186 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.540323 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67xgs\" (UniqueName: \"kubernetes.io/projected/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-kube-api-access-67xgs\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.540434 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.540541 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.540651 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-logs\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.540801 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.539284 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.541695 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.541845 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-logs\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.546236 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.547789 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.553174 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.554502 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.556829 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67xgs\" (UniqueName: \"kubernetes.io/projected/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-kube-api-access-67xgs\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.571439 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.657789 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83e43a6-724d-4351-a0ec-4f7dc48850d1" path="/var/lib/kubelet/pods/a83e43a6-724d-4351-a0ec-4f7dc48850d1/volumes"
Jan 27 17:17:39 crc kubenswrapper[5049]: I0127 17:17:39.712217 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 17:17:43 crc kubenswrapper[5049]: I0127 17:17:43.478085 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" podUID="90cf84a1-03e1-46a5-96a1-8cddf4312669" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused"
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.656572 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.684776 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-config-data\") pod \"96e03d90-0aee-446d-bffa-836081fd1aaf\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") "
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.684822 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-combined-ca-bundle\") pod \"96e03d90-0aee-446d-bffa-836081fd1aaf\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") "
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.684849 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-credential-keys\") pod \"96e03d90-0aee-446d-bffa-836081fd1aaf\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") "
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.684888 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr9tv\" (UniqueName: \"kubernetes.io/projected/96e03d90-0aee-446d-bffa-836081fd1aaf-kube-api-access-mr9tv\") pod \"96e03d90-0aee-446d-bffa-836081fd1aaf\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") "
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.684942 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-scripts\") pod \"96e03d90-0aee-446d-bffa-836081fd1aaf\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") "
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.684977 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-fernet-keys\") pod \"96e03d90-0aee-446d-bffa-836081fd1aaf\" (UID: \"96e03d90-0aee-446d-bffa-836081fd1aaf\") "
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.692809 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "96e03d90-0aee-446d-bffa-836081fd1aaf" (UID: "96e03d90-0aee-446d-bffa-836081fd1aaf"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.693450 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-scripts" (OuterVolumeSpecName: "scripts") pod "96e03d90-0aee-446d-bffa-836081fd1aaf" (UID: "96e03d90-0aee-446d-bffa-836081fd1aaf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
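Volume teardown follows a fixed ladder of its own: operationExecutor.UnmountVolume started, UnmountVolume.TearDown succeeded, then Volume detached, with an extra UnmountDevice step for the local PVs. Matching on the UniqueName keeps the pairing stable even where the spec name and the detach name differ, as with the "glance"/local-storage02-crc volume above. A sketch under the same one-entry-per-line assumption:

    import re
    import sys

    STARTED = re.compile(r'UnmountVolume started for volume \\"[^\\"]+\\" \(UniqueName: \\"([^\\"]+)\\"\)')
    DETACHED = re.compile(r'Volume detached for volume \\"[^\\"]+\\" \(UniqueName: \\"([^\\"]+)\\"\)')

    started, detached = set(), set()
    for line in sys.stdin:
        if (m := STARTED.search(line)):
            started.add(m.group(1))
        if (m := DETACHED.search(line)):
            detached.add(m.group(1))

    # Anything left here started unmounting within the captured window but
    # never produced a "Volume detached" entry in that same window.
    for unique_name in sorted(started - detached):
        print("unmount started but no detach seen:", unique_name)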
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.694513 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e03d90-0aee-446d-bffa-836081fd1aaf-kube-api-access-mr9tv" (OuterVolumeSpecName: "kube-api-access-mr9tv") pod "96e03d90-0aee-446d-bffa-836081fd1aaf" (UID: "96e03d90-0aee-446d-bffa-836081fd1aaf"). InnerVolumeSpecName "kube-api-access-mr9tv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.694763 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "96e03d90-0aee-446d-bffa-836081fd1aaf" (UID: "96e03d90-0aee-446d-bffa-836081fd1aaf"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.719082 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96e03d90-0aee-446d-bffa-836081fd1aaf" (UID: "96e03d90-0aee-446d-bffa-836081fd1aaf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.720353 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-config-data" (OuterVolumeSpecName: "config-data") pod "96e03d90-0aee-446d-bffa-836081fd1aaf" (UID: "96e03d90-0aee-446d-bffa-836081fd1aaf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.782070 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.782121 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.792369 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.792405 5049 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.792421 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.792436 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.792453 5049 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/96e03d90-0aee-446d-bffa-836081fd1aaf-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:47 crc kubenswrapper[5049]: I0127 17:17:47.792469 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr9tv\" (UniqueName: \"kubernetes.io/projected/96e03d90-0aee-446d-bffa-836081fd1aaf-kube-api-access-mr9tv\") on node \"crc\" DevicePath \"\""
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.412141 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ntb6x" event={"ID":"96e03d90-0aee-446d-bffa-836081fd1aaf","Type":"ContainerDied","Data":"d250e49b447295ec27465c9ab25842c058c28c5c382f0a2b5073115fb4fee3e8"}
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.412177 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d250e49b447295ec27465c9ab25842c058c28c5c382f0a2b5073115fb4fee3e8"
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.412208 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ntb6x"
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.838008 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ntb6x"]
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.850649 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ntb6x"]
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.950657 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xrn5m"]
Jan 27 17:17:48 crc kubenswrapper[5049]: E0127 17:17:48.951388 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e03d90-0aee-446d-bffa-836081fd1aaf" containerName="keystone-bootstrap"
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.951415 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e03d90-0aee-446d-bffa-836081fd1aaf" containerName="keystone-bootstrap"
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.951625 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e03d90-0aee-446d-bffa-836081fd1aaf" containerName="keystone-bootstrap"
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.952560 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xrn5m"
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.954916 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.955201 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ztd82"
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.955371 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.955546 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.961932 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.963763 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xrn5m"]
Jan 27 17:17:48 crc kubenswrapper[5049]: I0127 17:17:48.974938 5049 scope.go:117] "RemoveContainer" containerID="5ad62f93fd5f08ecc541af184230b9153d343f1f885fa44aa69216e0b309c970"
Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.020575 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-combined-ca-bundle\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m"
Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.020697 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-scripts\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m"
Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.020815 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbwcn\" (UniqueName: \"kubernetes.io/projected/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-kube-api-access-dbwcn\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m"
Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.020914 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-config-data\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m"
Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.020940 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-credential-keys\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m"
Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.021057 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-fernet-keys\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m"
Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.123580 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-scripts\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m"
Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.123641 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbwcn\" (UniqueName: \"kubernetes.io/projected/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-kube-api-access-dbwcn\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m"
Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.123745 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-config-data\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m"
Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.123770 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-credential-keys\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m"
Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.123809 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-fernet-keys\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m"
Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.123845 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-combined-ca-bundle\") pod 
\"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.129361 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-scripts\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.130495 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-combined-ca-bundle\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.131844 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-config-data\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.137050 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-credential-keys\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.137196 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-fernet-keys\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.141554 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbwcn\" (UniqueName: \"kubernetes.io/projected/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-kube-api-access-dbwcn\") pod \"keystone-bootstrap-xrn5m\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " pod="openstack/keystone-bootstrap-xrn5m" Jan 27 17:17:49 crc kubenswrapper[5049]: E0127 17:17:49.165783 5049 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 27 17:17:49 crc kubenswrapper[5049]: E0127 17:17:49.165946 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bhhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-dqp4j_openstack(bd627f49-e48d-4f81-a41c-3c753fdb27b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 17:17:49 crc kubenswrapper[5049]: E0127 17:17:49.170117 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-dqp4j" podUID="bd627f49-e48d-4f81-a41c-3c753fdb27b3" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.402870 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.427852 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-config\") pod \"90cf84a1-03e1-46a5-96a1-8cddf4312669\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.427937 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmtcz\" (UniqueName: \"kubernetes.io/projected/90cf84a1-03e1-46a5-96a1-8cddf4312669-kube-api-access-kmtcz\") pod \"90cf84a1-03e1-46a5-96a1-8cddf4312669\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.427966 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-dns-swift-storage-0\") pod \"90cf84a1-03e1-46a5-96a1-8cddf4312669\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.428061 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-ovsdbserver-nb\") pod \"90cf84a1-03e1-46a5-96a1-8cddf4312669\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.428098 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-dns-svc\") pod \"90cf84a1-03e1-46a5-96a1-8cddf4312669\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.428123 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-ovsdbserver-sb\") pod \"90cf84a1-03e1-46a5-96a1-8cddf4312669\" (UID: \"90cf84a1-03e1-46a5-96a1-8cddf4312669\") " Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.440009 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xrn5m" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.444960 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90cf84a1-03e1-46a5-96a1-8cddf4312669-kube-api-access-kmtcz" (OuterVolumeSpecName: "kube-api-access-kmtcz") pod "90cf84a1-03e1-46a5-96a1-8cddf4312669" (UID: "90cf84a1-03e1-46a5-96a1-8cddf4312669"). InnerVolumeSpecName "kube-api-access-kmtcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.446072 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" event={"ID":"90cf84a1-03e1-46a5-96a1-8cddf4312669","Type":"ContainerDied","Data":"cdf132c74a722dedf4d49bf7eded8266fb980bef35fd6162ed61624642291f51"} Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.446147 5049 scope.go:117] "RemoveContainer" containerID="843c7623ef79919c967dd028bf2b8986ba3d72e3eb1574bd51bbde06bbbf7480" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.446310 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" Jan 27 17:17:49 crc kubenswrapper[5049]: E0127 17:17:49.452959 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-dqp4j" podUID="bd627f49-e48d-4f81-a41c-3c753fdb27b3" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.490618 5049 scope.go:117] "RemoveContainer" containerID="a09c8be9e60e8da8466e088d675944d29a3b1e004e1b8ce8a001af4f420ad9cc" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.530723 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.530820 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmtcz\" (UniqueName: \"kubernetes.io/projected/90cf84a1-03e1-46a5-96a1-8cddf4312669-kube-api-access-kmtcz\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:49 crc kubenswrapper[5049]: W0127 17:17:49.541225 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda06b8e7e_7f19_47be_999f_dd2db1f6a2ce.slice/crio-acd6a38fc81c7978ae3553378ff5774849147dfe71bf8d18d72efffd01dea382 WatchSource:0}: Error finding container acd6a38fc81c7978ae3553378ff5774849147dfe71bf8d18d72efffd01dea382: Status 404 returned error can't find the container with id acd6a38fc81c7978ae3553378ff5774849147dfe71bf8d18d72efffd01dea382 Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.637521 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "90cf84a1-03e1-46a5-96a1-8cddf4312669" (UID: "90cf84a1-03e1-46a5-96a1-8cddf4312669"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.650572 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "90cf84a1-03e1-46a5-96a1-8cddf4312669" (UID: "90cf84a1-03e1-46a5-96a1-8cddf4312669"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.664010 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90cf84a1-03e1-46a5-96a1-8cddf4312669" (UID: "90cf84a1-03e1-46a5-96a1-8cddf4312669"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.665263 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96e03d90-0aee-446d-bffa-836081fd1aaf" path="/var/lib/kubelet/pods/96e03d90-0aee-446d-bffa-836081fd1aaf/volumes" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.687137 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-config" (OuterVolumeSpecName: "config") pod "90cf84a1-03e1-46a5-96a1-8cddf4312669" (UID: "90cf84a1-03e1-46a5-96a1-8cddf4312669"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.690163 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "90cf84a1-03e1-46a5-96a1-8cddf4312669" (UID: "90cf84a1-03e1-46a5-96a1-8cddf4312669"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.735280 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.735316 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.735528 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.736559 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.736574 5049 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90cf84a1-03e1-46a5-96a1-8cddf4312669-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.779832 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wrndw"] Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.786991 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wrndw"] Jan 27 17:17:49 crc kubenswrapper[5049]: I0127 17:17:49.958470 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xrn5m"] Jan 27 17:17:50 crc kubenswrapper[5049]: I0127 17:17:50.461518 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hl7hg" event={"ID":"032e489f-aab0-40a8-b7ce-99febca8d8be","Type":"ContainerStarted","Data":"bf4496e8b75d16d17c6412080add0938fc8b015c3af6283bf184fae15703af44"} Jan 27 17:17:50 crc kubenswrapper[5049]: I0127 17:17:50.465418 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce","Type":"ContainerStarted","Data":"5b8ee3a3cb2a7b7a82cbef160c7edc02a36cb9de9388e94128e34611a436252d"} Jan 27 17:17:50 crc kubenswrapper[5049]: I0127 17:17:50.465453 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce","Type":"ContainerStarted","Data":"acd6a38fc81c7978ae3553378ff5774849147dfe71bf8d18d72efffd01dea382"} Jan 27 17:17:50 crc kubenswrapper[5049]: I0127 17:17:50.467181 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e7a08784-0e34-4e50-8cca-4f2845e7a11e","Type":"ContainerStarted","Data":"6d19d3d52bbd315e0e87ef32bfe01d32fcf884ed918c24ae6ea7a2d07e792b66"} Jan 27 17:17:50 crc kubenswrapper[5049]: I0127 17:17:50.470989 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4ftjm" event={"ID":"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c","Type":"ContainerStarted","Data":"ae0d327447843e6e5818c34d84bfad4757042611fef3894efd400ad9be445ea2"} Jan 27 17:17:50 crc kubenswrapper[5049]: I0127 17:17:50.473915 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ad98ce6f-0ee2-4d46-9d6a-935617491f5d","Type":"ContainerStarted","Data":"a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252"} Jan 27 17:17:50 crc kubenswrapper[5049]: I0127 17:17:50.474059 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ad98ce6f-0ee2-4d46-9d6a-935617491f5d" containerName="glance-log" containerID="cri-o://9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad" gracePeriod=30 Jan 27 17:17:50 crc kubenswrapper[5049]: I0127 17:17:50.474405 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ad98ce6f-0ee2-4d46-9d6a-935617491f5d" containerName="glance-httpd" containerID="cri-o://a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252" gracePeriod=30 Jan 27 17:17:50 crc kubenswrapper[5049]: I0127 17:17:50.479819 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xrn5m" event={"ID":"b4ec2c4a-f699-4b6e-b9f4-90f83647167e","Type":"ContainerStarted","Data":"8832f0497f6317088c066f86d39c6ff6515783e0965d4828a799b9a8e0dc9357"} Jan 27 17:17:50 crc kubenswrapper[5049]: I0127 17:17:50.479875 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xrn5m" event={"ID":"b4ec2c4a-f699-4b6e-b9f4-90f83647167e","Type":"ContainerStarted","Data":"dac1a356186e8f99940a4ca1c7376d24697f241972931115c679d030b70b2451"} Jan 27 17:17:50 crc kubenswrapper[5049]: I0127 17:17:50.491699 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-hl7hg" podStartSLOduration=3.981671742 podStartE2EDuration="25.491661938s" podCreationTimestamp="2026-01-27 17:17:25 +0000 UTC" firstStartedPulling="2026-01-27 17:17:27.467890264 +0000 UTC m=+1222.566863813" lastFinishedPulling="2026-01-27 17:17:48.97788045 +0000 UTC m=+1244.076854009" observedRunningTime="2026-01-27 17:17:50.490858865 +0000 UTC m=+1245.589832414" watchObservedRunningTime="2026-01-27 17:17:50.491661938 +0000 UTC m=+1245.590635497" Jan 27 17:17:50 crc kubenswrapper[5049]: I0127 17:17:50.517063 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-4ftjm" podStartSLOduration=3.07368435 podStartE2EDuration="25.517041014s" podCreationTimestamp="2026-01-27 17:17:25 +0000 UTC" firstStartedPulling="2026-01-27 17:17:26.53638284 +0000 UTC m=+1221.635356389" lastFinishedPulling="2026-01-27 17:17:48.979739504 +0000 UTC m=+1244.078713053" observedRunningTime="2026-01-27 17:17:50.506828722 +0000 UTC m=+1245.605802271" watchObservedRunningTime="2026-01-27 17:17:50.517041014 +0000 UTC m=+1245.616014563" Jan 27 17:17:50 crc kubenswrapper[5049]: I0127 17:17:50.579625 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xrn5m" 
podStartSLOduration=2.579604353 podStartE2EDuration="2.579604353s" podCreationTimestamp="2026-01-27 17:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:17:50.547950368 +0000 UTC m=+1245.646923917" watchObservedRunningTime="2026-01-27 17:17:50.579604353 +0000 UTC m=+1245.678577892" Jan 27 17:17:50 crc kubenswrapper[5049]: I0127 17:17:50.580831 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=25.580824678 podStartE2EDuration="25.580824678s" podCreationTimestamp="2026-01-27 17:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:17:50.572371456 +0000 UTC m=+1245.671345005" watchObservedRunningTime="2026-01-27 17:17:50.580824678 +0000 UTC m=+1245.679798227" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.390716 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.472089 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbvwl\" (UniqueName: \"kubernetes.io/projected/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-kube-api-access-cbvwl\") pod \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.472204 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-public-tls-certs\") pod \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.472252 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-combined-ca-bundle\") pod \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.472289 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-httpd-run\") pod \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.472374 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-config-data\") pod \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.472409 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-scripts\") pod \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.472446 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\" (UID: 
\"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.472489 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-logs\") pod \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\" (UID: \"ad98ce6f-0ee2-4d46-9d6a-935617491f5d\") " Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.472785 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ad98ce6f-0ee2-4d46-9d6a-935617491f5d" (UID: "ad98ce6f-0ee2-4d46-9d6a-935617491f5d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.473129 5049 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.473496 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-logs" (OuterVolumeSpecName: "logs") pod "ad98ce6f-0ee2-4d46-9d6a-935617491f5d" (UID: "ad98ce6f-0ee2-4d46-9d6a-935617491f5d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.478152 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "ad98ce6f-0ee2-4d46-9d6a-935617491f5d" (UID: "ad98ce6f-0ee2-4d46-9d6a-935617491f5d"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.478337 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-kube-api-access-cbvwl" (OuterVolumeSpecName: "kube-api-access-cbvwl") pod "ad98ce6f-0ee2-4d46-9d6a-935617491f5d" (UID: "ad98ce6f-0ee2-4d46-9d6a-935617491f5d"). InnerVolumeSpecName "kube-api-access-cbvwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.479051 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-scripts" (OuterVolumeSpecName: "scripts") pod "ad98ce6f-0ee2-4d46-9d6a-935617491f5d" (UID: "ad98ce6f-0ee2-4d46-9d6a-935617491f5d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.490544 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce","Type":"ContainerStarted","Data":"69664f7fefda5e8dfe01261ba6d1025f27132ff8dcf82bf85b12d1ee671ff5f9"} Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.496641 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a08784-0e34-4e50-8cca-4f2845e7a11e","Type":"ContainerStarted","Data":"200bb5266b92ae3cd871bcb96377d468ca251a22b506d07385742aa7bf8b7fc4"} Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.501759 5049 generic.go:334] "Generic (PLEG): container finished" podID="ad98ce6f-0ee2-4d46-9d6a-935617491f5d" containerID="a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252" exitCode=0 Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.501796 5049 generic.go:334] "Generic (PLEG): container finished" podID="ad98ce6f-0ee2-4d46-9d6a-935617491f5d" containerID="9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad" exitCode=143 Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.501843 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.501903 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ad98ce6f-0ee2-4d46-9d6a-935617491f5d","Type":"ContainerDied","Data":"a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252"} Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.501951 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ad98ce6f-0ee2-4d46-9d6a-935617491f5d","Type":"ContainerDied","Data":"9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad"} Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.501966 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ad98ce6f-0ee2-4d46-9d6a-935617491f5d","Type":"ContainerDied","Data":"abc1a647b89ad2705f9f12f48fd8256add0a4e27ab0b580688f0ac6b677d4b7c"} Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.501984 5049 scope.go:117] "RemoveContainer" containerID="a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.510476 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad98ce6f-0ee2-4d46-9d6a-935617491f5d" (UID: "ad98ce6f-0ee2-4d46-9d6a-935617491f5d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.520015 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.519989875 podStartE2EDuration="12.519989875s" podCreationTimestamp="2026-01-27 17:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:17:51.51456758 +0000 UTC m=+1246.613541149" watchObservedRunningTime="2026-01-27 17:17:51.519989875 +0000 UTC m=+1246.618963424" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.533122 5049 scope.go:117] "RemoveContainer" containerID="9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.546405 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ad98ce6f-0ee2-4d46-9d6a-935617491f5d" (UID: "ad98ce6f-0ee2-4d46-9d6a-935617491f5d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.548755 5049 scope.go:117] "RemoveContainer" containerID="a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252" Jan 27 17:17:51 crc kubenswrapper[5049]: E0127 17:17:51.549181 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252\": container with ID starting with a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252 not found: ID does not exist" containerID="a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.549215 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252"} err="failed to get container status \"a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252\": rpc error: code = NotFound desc = could not find container \"a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252\": container with ID starting with a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252 not found: ID does not exist" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.549235 5049 scope.go:117] "RemoveContainer" containerID="9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad" Jan 27 17:17:51 crc kubenswrapper[5049]: E0127 17:17:51.549536 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad\": container with ID starting with 9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad not found: ID does not exist" containerID="9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.549583 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad"} err="failed to get container status \"9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad\": rpc error: code = NotFound desc = could not find container 
\"9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad\": container with ID starting with 9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad not found: ID does not exist" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.549615 5049 scope.go:117] "RemoveContainer" containerID="a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.549910 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252"} err="failed to get container status \"a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252\": rpc error: code = NotFound desc = could not find container \"a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252\": container with ID starting with a2518069d11aa8498744c5c5eec9065e298a1fe0ce126974bc9298df50b73252 not found: ID does not exist" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.549934 5049 scope.go:117] "RemoveContainer" containerID="9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.550184 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad"} err="failed to get container status \"9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad\": rpc error: code = NotFound desc = could not find container \"9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad\": container with ID starting with 9623cb0c09900306a2636176092cf1f68a67e08e43e2c57d9932358e067e26ad not found: ID does not exist" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.566536 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-config-data" (OuterVolumeSpecName: "config-data") pod "ad98ce6f-0ee2-4d46-9d6a-935617491f5d" (UID: "ad98ce6f-0ee2-4d46-9d6a-935617491f5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.575148 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbvwl\" (UniqueName: \"kubernetes.io/projected/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-kube-api-access-cbvwl\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.575188 5049 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.575204 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.575216 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.575229 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.575253 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.575267 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad98ce6f-0ee2-4d46-9d6a-935617491f5d-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.600339 5049 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.660410 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90cf84a1-03e1-46a5-96a1-8cddf4312669" path="/var/lib/kubelet/pods/90cf84a1-03e1-46a5-96a1-8cddf4312669/volumes" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.676919 5049 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.852188 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.864924 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.887017 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:17:51 crc kubenswrapper[5049]: E0127 17:17:51.887442 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad98ce6f-0ee2-4d46-9d6a-935617491f5d" containerName="glance-httpd" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.887467 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad98ce6f-0ee2-4d46-9d6a-935617491f5d" containerName="glance-httpd" Jan 27 17:17:51 crc kubenswrapper[5049]: E0127 
17:17:51.887497 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90cf84a1-03e1-46a5-96a1-8cddf4312669" containerName="init" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.887507 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="90cf84a1-03e1-46a5-96a1-8cddf4312669" containerName="init" Jan 27 17:17:51 crc kubenswrapper[5049]: E0127 17:17:51.887530 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90cf84a1-03e1-46a5-96a1-8cddf4312669" containerName="dnsmasq-dns" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.887538 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="90cf84a1-03e1-46a5-96a1-8cddf4312669" containerName="dnsmasq-dns" Jan 27 17:17:51 crc kubenswrapper[5049]: E0127 17:17:51.887557 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad98ce6f-0ee2-4d46-9d6a-935617491f5d" containerName="glance-log" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.887566 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad98ce6f-0ee2-4d46-9d6a-935617491f5d" containerName="glance-log" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.887804 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad98ce6f-0ee2-4d46-9d6a-935617491f5d" containerName="glance-log" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.887829 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad98ce6f-0ee2-4d46-9d6a-935617491f5d" containerName="glance-httpd" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.887840 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="90cf84a1-03e1-46a5-96a1-8cddf4312669" containerName="dnsmasq-dns" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.889494 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.892609 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.899925 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.905610 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.982125 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.982184 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kw45\" (UniqueName: \"kubernetes.io/projected/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-kube-api-access-5kw45\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.982448 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-scripts\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.982496 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.982522 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-logs\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.982598 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.982704 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:51 crc kubenswrapper[5049]: I0127 17:17:51.982804 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-config-data\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.084463 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.084593 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.084684 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-config-data\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.084755 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.084788 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kw45\" (UniqueName: \"kubernetes.io/projected/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-kube-api-access-5kw45\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.085052 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.085598 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.085693 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-scripts\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.085727 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.085764 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-logs\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.086124 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-logs\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.090168 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.100219 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-config-data\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.104272 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-scripts\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.105011 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.117445 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kw45\" (UniqueName: \"kubernetes.io/projected/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-kube-api-access-5kw45\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.117499 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.207964 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 17:17:52 crc kubenswrapper[5049]: I0127 17:17:52.808222 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:17:52 crc kubenswrapper[5049]: W0127 17:17:52.811960 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bc08fc1_cb54_4c3f_888d_89c9ea303a80.slice/crio-7222092dfb0a029a189b9e8c425ce40815848726660d21d395ac5da331ee3327 WatchSource:0}: Error finding container 7222092dfb0a029a189b9e8c425ce40815848726660d21d395ac5da331ee3327: Status 404 returned error can't find the container with id 7222092dfb0a029a189b9e8c425ce40815848726660d21d395ac5da331ee3327 Jan 27 17:17:53 crc kubenswrapper[5049]: I0127 17:17:53.478931 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-wrndw" podUID="90cf84a1-03e1-46a5-96a1-8cddf4312669" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Jan 27 17:17:53 crc kubenswrapper[5049]: I0127 17:17:53.525048 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5bc08fc1-cb54-4c3f-888d-89c9ea303a80","Type":"ContainerStarted","Data":"f458b871a2c65ece7609fd14e073bf2f348d244190d8335cdf4a8b6fa65b2442"} Jan 27 17:17:53 crc kubenswrapper[5049]: I0127 17:17:53.525105 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5bc08fc1-cb54-4c3f-888d-89c9ea303a80","Type":"ContainerStarted","Data":"7222092dfb0a029a189b9e8c425ce40815848726660d21d395ac5da331ee3327"} Jan 27 17:17:53 crc kubenswrapper[5049]: I0127 17:17:53.527011 5049 generic.go:334] "Generic (PLEG): container finished" podID="032e489f-aab0-40a8-b7ce-99febca8d8be" containerID="bf4496e8b75d16d17c6412080add0938fc8b015c3af6283bf184fae15703af44" exitCode=0 Jan 27 17:17:53 crc kubenswrapper[5049]: I0127 17:17:53.527054 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hl7hg" event={"ID":"032e489f-aab0-40a8-b7ce-99febca8d8be","Type":"ContainerDied","Data":"bf4496e8b75d16d17c6412080add0938fc8b015c3af6283bf184fae15703af44"} Jan 27 17:17:53 crc kubenswrapper[5049]: I0127 17:17:53.674238 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad98ce6f-0ee2-4d46-9d6a-935617491f5d" path="/var/lib/kubelet/pods/ad98ce6f-0ee2-4d46-9d6a-935617491f5d/volumes" Jan 27 17:17:53 crc kubenswrapper[5049]: E0127 17:17:53.955827 5049 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4ec2c4a_f699_4b6e_b9f4_90f83647167e.slice/crio-conmon-8832f0497f6317088c066f86d39c6ff6515783e0965d4828a799b9a8e0dc9357.scope\": RecentStats: unable to find data in memory cache]" Jan 27 17:17:54 crc kubenswrapper[5049]: I0127 17:17:54.537001 5049 generic.go:334] "Generic (PLEG): container finished" podID="b4ec2c4a-f699-4b6e-b9f4-90f83647167e" containerID="8832f0497f6317088c066f86d39c6ff6515783e0965d4828a799b9a8e0dc9357" exitCode=0 Jan 27 17:17:54 crc kubenswrapper[5049]: I0127 17:17:54.537078 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xrn5m" event={"ID":"b4ec2c4a-f699-4b6e-b9f4-90f83647167e","Type":"ContainerDied","Data":"8832f0497f6317088c066f86d39c6ff6515783e0965d4828a799b9a8e0dc9357"} Jan 27 17:17:54 crc 
kubenswrapper[5049]: I0127 17:17:54.538837 5049 generic.go:334] "Generic (PLEG): container finished" podID="0ebf7681-25b8-4db9-a7b2-86fca3ddc37c" containerID="ae0d327447843e6e5818c34d84bfad4757042611fef3894efd400ad9be445ea2" exitCode=0 Jan 27 17:17:54 crc kubenswrapper[5049]: I0127 17:17:54.538895 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4ftjm" event={"ID":"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c","Type":"ContainerDied","Data":"ae0d327447843e6e5818c34d84bfad4757042611fef3894efd400ad9be445ea2"} Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.277399 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xrn5m" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.351355 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4ftjm" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.358054 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.400383 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-config-data\") pod \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.400450 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-combined-ca-bundle\") pod \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.401042 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-combined-ca-bundle\") pod \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\" (UID: \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\") " Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.401079 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-scripts\") pod \"032e489f-aab0-40a8-b7ce-99febca8d8be\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.401103 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-combined-ca-bundle\") pod \"032e489f-aab0-40a8-b7ce-99febca8d8be\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.401145 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbwcn\" (UniqueName: \"kubernetes.io/projected/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-kube-api-access-dbwcn\") pod \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.401198 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-fernet-keys\") pod \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\" (UID: 
\"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.401233 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvxrj\" (UniqueName: \"kubernetes.io/projected/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-kube-api-access-kvxrj\") pod \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\" (UID: \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\") " Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.401279 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-config-data\") pod \"032e489f-aab0-40a8-b7ce-99febca8d8be\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.401341 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/032e489f-aab0-40a8-b7ce-99febca8d8be-logs\") pod \"032e489f-aab0-40a8-b7ce-99febca8d8be\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.401415 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-scripts\") pod \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.401443 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-credential-keys\") pod \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\" (UID: \"b4ec2c4a-f699-4b6e-b9f4-90f83647167e\") " Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.402113 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/032e489f-aab0-40a8-b7ce-99febca8d8be-logs" (OuterVolumeSpecName: "logs") pod "032e489f-aab0-40a8-b7ce-99febca8d8be" (UID: "032e489f-aab0-40a8-b7ce-99febca8d8be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.405269 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b4ec2c4a-f699-4b6e-b9f4-90f83647167e" (UID: "b4ec2c4a-f699-4b6e-b9f4-90f83647167e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.406412 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-kube-api-access-dbwcn" (OuterVolumeSpecName: "kube-api-access-dbwcn") pod "b4ec2c4a-f699-4b6e-b9f4-90f83647167e" (UID: "b4ec2c4a-f699-4b6e-b9f4-90f83647167e"). InnerVolumeSpecName "kube-api-access-dbwcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.406538 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-scripts" (OuterVolumeSpecName: "scripts") pod "032e489f-aab0-40a8-b7ce-99febca8d8be" (UID: "032e489f-aab0-40a8-b7ce-99febca8d8be"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.407653 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-scripts" (OuterVolumeSpecName: "scripts") pod "b4ec2c4a-f699-4b6e-b9f4-90f83647167e" (UID: "b4ec2c4a-f699-4b6e-b9f4-90f83647167e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.409145 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-kube-api-access-kvxrj" (OuterVolumeSpecName: "kube-api-access-kvxrj") pod "0ebf7681-25b8-4db9-a7b2-86fca3ddc37c" (UID: "0ebf7681-25b8-4db9-a7b2-86fca3ddc37c"). InnerVolumeSpecName "kube-api-access-kvxrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.409272 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b4ec2c4a-f699-4b6e-b9f4-90f83647167e" (UID: "b4ec2c4a-f699-4b6e-b9f4-90f83647167e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.424622 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-config-data" (OuterVolumeSpecName: "config-data") pod "b4ec2c4a-f699-4b6e-b9f4-90f83647167e" (UID: "b4ec2c4a-f699-4b6e-b9f4-90f83647167e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.431117 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4ec2c4a-f699-4b6e-b9f4-90f83647167e" (UID: "b4ec2c4a-f699-4b6e-b9f4-90f83647167e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.431320 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-config-data" (OuterVolumeSpecName: "config-data") pod "032e489f-aab0-40a8-b7ce-99febca8d8be" (UID: "032e489f-aab0-40a8-b7ce-99febca8d8be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.434221 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ebf7681-25b8-4db9-a7b2-86fca3ddc37c" (UID: "0ebf7681-25b8-4db9-a7b2-86fca3ddc37c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.437820 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "032e489f-aab0-40a8-b7ce-99febca8d8be" (UID: "032e489f-aab0-40a8-b7ce-99febca8d8be"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.502566 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqbc2\" (UniqueName: \"kubernetes.io/projected/032e489f-aab0-40a8-b7ce-99febca8d8be-kube-api-access-tqbc2\") pod \"032e489f-aab0-40a8-b7ce-99febca8d8be\" (UID: \"032e489f-aab0-40a8-b7ce-99febca8d8be\") " Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.502615 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-db-sync-config-data\") pod \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\" (UID: \"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c\") " Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.502864 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.502880 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.502890 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.502898 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.502907 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.502916 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbwcn\" (UniqueName: \"kubernetes.io/projected/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-kube-api-access-dbwcn\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.502923 5049 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.502933 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvxrj\" (UniqueName: \"kubernetes.io/projected/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-kube-api-access-kvxrj\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.502942 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/032e489f-aab0-40a8-b7ce-99febca8d8be-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.502951 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/032e489f-aab0-40a8-b7ce-99febca8d8be-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.502958 5049 reconciler_common.go:293] "Volume detached 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.502965 5049 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b4ec2c4a-f699-4b6e-b9f4-90f83647167e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.505418 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0ebf7681-25b8-4db9-a7b2-86fca3ddc37c" (UID: "0ebf7681-25b8-4db9-a7b2-86fca3ddc37c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.505893 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/032e489f-aab0-40a8-b7ce-99febca8d8be-kube-api-access-tqbc2" (OuterVolumeSpecName: "kube-api-access-tqbc2") pod "032e489f-aab0-40a8-b7ce-99febca8d8be" (UID: "032e489f-aab0-40a8-b7ce-99febca8d8be"). InnerVolumeSpecName "kube-api-access-tqbc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.557331 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a08784-0e34-4e50-8cca-4f2845e7a11e","Type":"ContainerStarted","Data":"00cbf2b69fcf3dd52e4c1ec077bc6a2cab9216d7e365f44245df5894090a0cbe"} Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.558869 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-4ftjm" event={"ID":"0ebf7681-25b8-4db9-a7b2-86fca3ddc37c","Type":"ContainerDied","Data":"a7ccc5ef4c71b4b1943123f458233e4aebc59fb95b8a05841bfbb9a4db68c2bb"} Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.558893 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ccc5ef4c71b4b1943123f458233e4aebc59fb95b8a05841bfbb9a4db68c2bb" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.558951 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-4ftjm" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.568847 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xrn5m" event={"ID":"b4ec2c4a-f699-4b6e-b9f4-90f83647167e","Type":"ContainerDied","Data":"dac1a356186e8f99940a4ca1c7376d24697f241972931115c679d030b70b2451"} Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.568874 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dac1a356186e8f99940a4ca1c7376d24697f241972931115c679d030b70b2451" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.568912 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xrn5m" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.578054 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hl7hg" event={"ID":"032e489f-aab0-40a8-b7ce-99febca8d8be","Type":"ContainerDied","Data":"20a00e21057e3a6ce9e69f699ebb1c5cf030f3ef38b5f8477fc6193d90cc0f73"} Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.578117 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20a00e21057e3a6ce9e69f699ebb1c5cf030f3ef38b5f8477fc6193d90cc0f73" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.578223 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hl7hg" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.605583 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqbc2\" (UniqueName: \"kubernetes.io/projected/032e489f-aab0-40a8-b7ce-99febca8d8be-kube-api-access-tqbc2\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.605617 5049 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.660665 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-679b885964-9p8nj"] Jan 27 17:17:56 crc kubenswrapper[5049]: E0127 17:17:56.663909 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4ec2c4a-f699-4b6e-b9f4-90f83647167e" containerName="keystone-bootstrap" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.663940 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4ec2c4a-f699-4b6e-b9f4-90f83647167e" containerName="keystone-bootstrap" Jan 27 17:17:56 crc kubenswrapper[5049]: E0127 17:17:56.663996 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ebf7681-25b8-4db9-a7b2-86fca3ddc37c" containerName="barbican-db-sync" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.664006 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebf7681-25b8-4db9-a7b2-86fca3ddc37c" containerName="barbican-db-sync" Jan 27 17:17:56 crc kubenswrapper[5049]: E0127 17:17:56.664032 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="032e489f-aab0-40a8-b7ce-99febca8d8be" containerName="placement-db-sync" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.664043 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="032e489f-aab0-40a8-b7ce-99febca8d8be" containerName="placement-db-sync" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.665928 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4ec2c4a-f699-4b6e-b9f4-90f83647167e" containerName="keystone-bootstrap" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.665975 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ebf7681-25b8-4db9-a7b2-86fca3ddc37c" containerName="barbican-db-sync" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.666000 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="032e489f-aab0-40a8-b7ce-99febca8d8be" containerName="placement-db-sync" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.672752 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-679b885964-9p8nj"] Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.672887 5049 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.677013 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.677233 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.679168 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ztd82" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.679402 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.680464 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.680646 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.809300 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-credential-keys\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.809362 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-internal-tls-certs\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.809392 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-scripts\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.809465 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-config-data\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.809481 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-fernet-keys\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.809498 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znz92\" (UniqueName: \"kubernetes.io/projected/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-kube-api-access-znz92\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.809512 
5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-public-tls-certs\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.809538 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-combined-ca-bundle\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.830723 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6b7fd485fd-v5jzn"] Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.839578 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.848284 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-99bdp" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.848592 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.848792 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.881930 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b7fd485fd-v5jzn"] Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.896930 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5b447db58f-bdtv6"] Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.898290 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.911998 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.912784 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-config-data\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.912824 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-fernet-keys\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.912846 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znz92\" (UniqueName: \"kubernetes.io/projected/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-kube-api-access-znz92\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.912862 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-public-tls-certs\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.912890 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-combined-ca-bundle\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.912969 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-credential-keys\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.912995 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-internal-tls-certs\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.913030 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-scripts\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.917479 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-scripts\") pod 
\"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.918475 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-public-tls-certs\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.921837 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b447db58f-bdtv6"] Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.922390 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-credential-keys\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.922591 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-fernet-keys\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.922634 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-config-data\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.924204 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-internal-tls-certs\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.925093 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-combined-ca-bundle\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:56 crc kubenswrapper[5049]: I0127 17:17:56.992873 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znz92\" (UniqueName: \"kubernetes.io/projected/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-kube-api-access-znz92\") pod \"keystone-679b885964-9p8nj\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.016029 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.017176 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-config-data\") pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.017212 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-config-data\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.017261 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-combined-ca-bundle\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.017294 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76806163-4660-4265-ba1b-ed85f6d8c464-logs\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.017311 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shsgc\" (UniqueName: \"kubernetes.io/projected/76806163-4660-4265-ba1b-ed85f6d8c464-kube-api-access-shsgc\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.017334 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-config-data-custom\") pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.017350 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-592f9\" (UniqueName: \"kubernetes.io/projected/ca9c6d10-6357-4632-9c0f-ff477e8526f0-kube-api-access-592f9\") pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.017387 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-combined-ca-bundle\") pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.017420 
5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-config-data-custom\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.017445 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9c6d10-6357-4632-9c0f-ff477e8526f0-logs\") pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.019073 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vpd8n"] Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.032813 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vpd8n"] Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.032934 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.036253 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-fb468df94-7s5tf"] Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.037648 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.044179 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5d9fdfc85c-bpzmb"] Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.048292 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.048478 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fb468df94-7s5tf"] Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.058741 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5d9fdfc85c-bpzmb"] Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.119436 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-combined-ca-bundle\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.119493 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76806163-4660-4265-ba1b-ed85f6d8c464-logs\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.119521 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shsgc\" (UniqueName: \"kubernetes.io/projected/76806163-4660-4265-ba1b-ed85f6d8c464-kube-api-access-shsgc\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.119559 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-config-data-custom\") pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.119612 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-592f9\" (UniqueName: \"kubernetes.io/projected/ca9c6d10-6357-4632-9c0f-ff477e8526f0-kube-api-access-592f9\") pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.119660 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-combined-ca-bundle\") pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.119722 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-config-data-custom\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.119764 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9c6d10-6357-4632-9c0f-ff477e8526f0-logs\") 
pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.119804 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-config-data\") pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.119840 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-config-data\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.128888 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76806163-4660-4265-ba1b-ed85f6d8c464-logs\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.131800 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-config-data\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.132074 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9c6d10-6357-4632-9c0f-ff477e8526f0-logs\") pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.151005 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-combined-ca-bundle\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.152069 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-combined-ca-bundle\") pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.160219 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-config-data-custom\") pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.167881 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-592f9\" (UniqueName: 
\"kubernetes.io/projected/ca9c6d10-6357-4632-9c0f-ff477e8526f0-kube-api-access-592f9\") pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.177862 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-config-data\") pod \"barbican-worker-5b447db58f-bdtv6\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.180157 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-66dcf5f4cb-7jts2"] Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.181374 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shsgc\" (UniqueName: \"kubernetes.io/projected/76806163-4660-4265-ba1b-ed85f6d8c464-kube-api-access-shsgc\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.186359 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.186869 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-config-data-custom\") pod \"barbican-keystone-listener-6b7fd485fd-v5jzn\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.188523 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.208972 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66dcf5f4cb-7jts2"] Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.222499 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-config\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.222553 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-config-data-custom\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.222581 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmlk5\" (UniqueName: \"kubernetes.io/projected/adfa2378-a75a-41b5-9ea9-71c8da89f750-kube-api-access-wmlk5\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.222603 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.222627 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-combined-ca-bundle\") pod \"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.222651 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-config-data\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.222684 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.222712 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.222737 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-config-data\") pod \"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.222765 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-config-data-custom\") pod \"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.222779 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.222797 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-combined-ca-bundle\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " 
pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.223417 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25ad8919-34a1-4d3c-8f82-a8902bc857ff-logs\") pod \"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.223449 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adfa2378-a75a-41b5-9ea9-71c8da89f750-logs\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.223465 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffzjv\" (UniqueName: \"kubernetes.io/projected/25ad8919-34a1-4d3c-8f82-a8902bc857ff-kube-api-access-ffzjv\") pod \"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.224278 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckkcl\" (UniqueName: \"kubernetes.io/projected/bd3e6069-c811-467a-af8a-8aa62931fba7-kube-api-access-ckkcl\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.294268 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326117 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-config-data-custom\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326168 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmlk5\" (UniqueName: \"kubernetes.io/projected/adfa2378-a75a-41b5-9ea9-71c8da89f750-kube-api-access-wmlk5\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326190 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326215 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-combined-ca-bundle\") pod \"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326241 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-config-data\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326263 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326301 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326326 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-config-data\") pod \"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326355 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-config-data-custom\") pod 
\"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326370 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326386 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-combined-ca-bundle\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326402 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25ad8919-34a1-4d3c-8f82-a8902bc857ff-logs\") pod \"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326425 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-combined-ca-bundle\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326447 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adfa2378-a75a-41b5-9ea9-71c8da89f750-logs\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326463 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffzjv\" (UniqueName: \"kubernetes.io/projected/25ad8919-34a1-4d3c-8f82-a8902bc857ff-kube-api-access-ffzjv\") pod \"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326481 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e4e251-30ea-4628-9490-d88425a297ce-logs\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326500 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckkcl\" (UniqueName: \"kubernetes.io/projected/bd3e6069-c811-467a-af8a-8aa62931fba7-kube-api-access-ckkcl\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326523 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-config-data-custom\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326546 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnc29\" (UniqueName: \"kubernetes.io/projected/53e4e251-30ea-4628-9490-d88425a297ce-kube-api-access-cnc29\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326577 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-config-data\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.326603 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-config\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.328796 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-config\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.329487 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25ad8919-34a1-4d3c-8f82-a8902bc857ff-logs\") pod \"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.330170 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.331200 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adfa2378-a75a-41b5-9ea9-71c8da89f750-logs\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.331858 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.331882 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-ovsdbserver-nb\") pod 
\"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.332323 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-config-data-custom\") pod \"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.336550 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-combined-ca-bundle\") pod \"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.338414 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.341447 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-config-data-custom\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.342228 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-config-data\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.343169 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffzjv\" (UniqueName: \"kubernetes.io/projected/25ad8919-34a1-4d3c-8f82-a8902bc857ff-kube-api-access-ffzjv\") pod \"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.344423 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-config-data\") pod \"barbican-worker-5d9fdfc85c-bpzmb\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.345790 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckkcl\" (UniqueName: \"kubernetes.io/projected/bd3e6069-c811-467a-af8a-8aa62931fba7-kube-api-access-ckkcl\") pod \"dnsmasq-dns-586bdc5f9-vpd8n\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.345970 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmlk5\" (UniqueName: 
\"kubernetes.io/projected/adfa2378-a75a-41b5-9ea9-71c8da89f750-kube-api-access-wmlk5\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.356118 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-combined-ca-bundle\") pod \"barbican-keystone-listener-fb468df94-7s5tf\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.356956 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.436402 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-combined-ca-bundle\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.436472 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e4e251-30ea-4628-9490-d88425a297ce-logs\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.436657 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-config-data-custom\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.436854 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnc29\" (UniqueName: \"kubernetes.io/projected/53e4e251-30ea-4628-9490-d88425a297ce-kube-api-access-cnc29\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.436918 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-config-data\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.438106 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e4e251-30ea-4628-9490-d88425a297ce-logs\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.447939 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-combined-ca-bundle\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 
Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.449156 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-config-data\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.449726 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-config-data-custom\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.478892 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnc29\" (UniqueName: \"kubernetes.io/projected/53e4e251-30ea-4628-9490-d88425a297ce-kube-api-access-cnc29\") pod \"barbican-api-66dcf5f4cb-7jts2\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.479523 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.513921 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6c4bd57ddb-fz2dp"] Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.516300 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.522553 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.522793 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.522896 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.523463 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.523568 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-l9ptw" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.538184 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.544094 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85620b2d-c74a-4c51-8129-c747016dc357-logs\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.544136 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-694c9\" (UniqueName: \"kubernetes.io/projected/85620b2d-c74a-4c51-8129-c747016dc357-kube-api-access-694c9\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.544187 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-scripts\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.544207 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-internal-tls-certs\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.544252 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-public-tls-certs\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.544284 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-combined-ca-bundle\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.544318 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-config-data\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.546869 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c4bd57ddb-fz2dp"] Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.550923 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.568177 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.588431 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-679b885964-9p8nj"] Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.594303 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5bc08fc1-cb54-4c3f-888d-89c9ea303a80","Type":"ContainerStarted","Data":"23107b5e5f1b2d4827d68b36eeea40a74e59b5f3403e846ebcd74816a3ab0840"} Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.631219 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.631199603 podStartE2EDuration="6.631199603s" podCreationTimestamp="2026-01-27 17:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:17:57.617441989 +0000 UTC m=+1252.716415538" watchObservedRunningTime="2026-01-27 17:17:57.631199603 +0000 UTC m=+1252.730173152" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.645480 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-combined-ca-bundle\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.645529 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-config-data\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.645605 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85620b2d-c74a-4c51-8129-c747016dc357-logs\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.645627 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-694c9\" (UniqueName: \"kubernetes.io/projected/85620b2d-c74a-4c51-8129-c747016dc357-kube-api-access-694c9\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.645661 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-scripts\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.645689 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-internal-tls-certs\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.645722 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-public-tls-certs\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.647514 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85620b2d-c74a-4c51-8129-c747016dc357-logs\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.656212 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-combined-ca-bundle\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.663083 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-internal-tls-certs\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.664950 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-scripts\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.670894 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-config-data\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.671263 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-694c9\" (UniqueName: \"kubernetes.io/projected/85620b2d-c74a-4c51-8129-c747016dc357-kube-api-access-694c9\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.671630 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-public-tls-certs\") pod \"placement-6c4bd57ddb-fz2dp\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.844833 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.862267 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b447db58f-bdtv6"] Jan 27 17:17:57 crc kubenswrapper[5049]: I0127 17:17:57.900608 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vpd8n"] Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.100828 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6b7fd485fd-v5jzn"] Jan 27 17:17:58 crc kubenswrapper[5049]: W0127 17:17:58.121375 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76806163_4660_4265_ba1b_ed85f6d8c464.slice/crio-081604bcac92a609102630edaadaf318044bf585b4e7058ac2eba2ec192339fc WatchSource:0}: Error finding container 081604bcac92a609102630edaadaf318044bf585b4e7058ac2eba2ec192339fc: Status 404 returned error can't find the container with id 081604bcac92a609102630edaadaf318044bf585b4e7058ac2eba2ec192339fc Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.213141 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fb468df94-7s5tf"] Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.227528 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66dcf5f4cb-7jts2"] Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.301561 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c4bd57ddb-fz2dp"] Jan 27 17:17:58 crc kubenswrapper[5049]: W0127 17:17:58.320870 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85620b2d_c74a_4c51_8129_c747016dc357.slice/crio-13733998ed833daa8145197efcef30ead77e37441e6640da09acba5580372e51 WatchSource:0}: Error finding container 13733998ed833daa8145197efcef30ead77e37441e6640da09acba5580372e51: Status 404 returned error can't find the container with id 13733998ed833daa8145197efcef30ead77e37441e6640da09acba5580372e51 Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.346414 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5d9fdfc85c-bpzmb"] Jan 27 17:17:58 crc kubenswrapper[5049]: W0127 17:17:58.365243 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25ad8919_34a1_4d3c_8f82_a8902bc857ff.slice/crio-79b865e7beb8089343e65f2cf1b89a0a0a1e891828529073ad42d44609f0433b WatchSource:0}: Error finding container 79b865e7beb8089343e65f2cf1b89a0a0a1e891828529073ad42d44609f0433b: Status 404 returned error can't find the container with id 79b865e7beb8089343e65f2cf1b89a0a0a1e891828529073ad42d44609f0433b Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.618082 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" event={"ID":"76806163-4660-4265-ba1b-ed85f6d8c464","Type":"ContainerStarted","Data":"081604bcac92a609102630edaadaf318044bf585b4e7058ac2eba2ec192339fc"} Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.621535 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-679b885964-9p8nj" event={"ID":"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e","Type":"ContainerStarted","Data":"064ca2d2f070edfd81bea729538cd5a51f364f47fe874ac76e3a571a97d5681c"} Jan 27 17:17:58 crc 
Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.621573 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-679b885964-9p8nj" event={"ID":"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e","Type":"ContainerStarted","Data":"1daa8674aa8e85c04ee4652965be4a4ef4c6b9585153c6224aa29de71ec45ff9"} Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.621587 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.636954 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" event={"ID":"25ad8919-34a1-4d3c-8f82-a8902bc857ff","Type":"ContainerStarted","Data":"79b865e7beb8089343e65f2cf1b89a0a0a1e891828529073ad42d44609f0433b"} Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.639841 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b447db58f-bdtv6" event={"ID":"ca9c6d10-6357-4632-9c0f-ff477e8526f0","Type":"ContainerStarted","Data":"c6a3c65c760b7689605cf05f2bd85f94d21d09f609f7509872ea03e672ea43a0"} Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.652458 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66dcf5f4cb-7jts2" event={"ID":"53e4e251-30ea-4628-9490-d88425a297ce","Type":"ContainerStarted","Data":"4495a59b101f2f8e71cb450e895a12da1a97156a119aeceb32ed7d78686b04a8"} Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.652500 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66dcf5f4cb-7jts2" event={"ID":"53e4e251-30ea-4628-9490-d88425a297ce","Type":"ContainerStarted","Data":"12493c0000b71cd932e93ed82cd14bb85ddf2a5900d2f5c84ff3365f2319945a"} Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.655014 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-679b885964-9p8nj" podStartSLOduration=2.6549995600000003 podStartE2EDuration="2.65499956s" podCreationTimestamp="2026-01-27 17:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:17:58.640757802 +0000 UTC m=+1253.739731341" watchObservedRunningTime="2026-01-27 17:17:58.65499956 +0000 UTC m=+1253.753973109" Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.667741 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c4bd57ddb-fz2dp" event={"ID":"85620b2d-c74a-4c51-8129-c747016dc357","Type":"ContainerStarted","Data":"13733998ed833daa8145197efcef30ead77e37441e6640da09acba5580372e51"} Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.670415 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" event={"ID":"adfa2378-a75a-41b5-9ea9-71c8da89f750","Type":"ContainerStarted","Data":"4b96bf1d48df908af869429f502c5e9a251dcc55a8adde567eee0b8a31a9912b"} Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.675329 5049 generic.go:334] "Generic (PLEG): container finished" podID="bd3e6069-c811-467a-af8a-8aa62931fba7" containerID="fbbf30b55f9b7a326a1f301393730edd8980a4eb58d739a9bba4e7c9224fcac0" exitCode=0 Jan 27 17:17:58 crc kubenswrapper[5049]: I0127 17:17:58.677800 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" event={"ID":"bd3e6069-c811-467a-af8a-8aa62931fba7","Type":"ContainerDied","Data":"fbbf30b55f9b7a326a1f301393730edd8980a4eb58d739a9bba4e7c9224fcac0"} Jan 27 17:17:58 crc 
kubenswrapper[5049]: I0127 17:17:58.677828 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" event={"ID":"bd3e6069-c811-467a-af8a-8aa62931fba7","Type":"ContainerStarted","Data":"1b2b908342669259f60197fa77404517844c907a708de230d4eeefb060c64b40"} Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.666194 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c874955f4-txmc8"] Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.671457 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.675108 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.675435 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c874955f4-txmc8"] Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.681239 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.689209 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66dcf5f4cb-7jts2" event={"ID":"53e4e251-30ea-4628-9490-d88425a297ce","Type":"ContainerStarted","Data":"1a6e777297c190452814bc9c3ab5451de8be3afe6a0f7e2abb6ad261030154af"} Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.689281 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.689370 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.695364 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c4bd57ddb-fz2dp" event={"ID":"85620b2d-c74a-4c51-8129-c747016dc357","Type":"ContainerStarted","Data":"11f21899d008114f9b81084b04d95a0c833f10d46074769ab7100499bc045dff"} Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.701789 5049 generic.go:334] "Generic (PLEG): container finished" podID="97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7" containerID="a03fcd74978f09ca045dfe9c61b9f24cf5e346044f862b8fae04cbbf4b1c4ef2" exitCode=0 Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.702459 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9wgjf" event={"ID":"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7","Type":"ContainerDied","Data":"a03fcd74978f09ca045dfe9c61b9f24cf5e346044f862b8fae04cbbf4b1c4ef2"} Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.712781 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.712830 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.727413 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-66dcf5f4cb-7jts2" podStartSLOduration=2.727394456 podStartE2EDuration="2.727394456s" podCreationTimestamp="2026-01-27 17:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:17:59.724703869 +0000 UTC m=+1254.823677438" 
watchObservedRunningTime="2026-01-27 17:17:59.727394456 +0000 UTC m=+1254.826368005" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.731955 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-954hb\" (UniqueName: \"kubernetes.io/projected/fd8752fa-c3a1-4eba-91dc-6af200eb8168-kube-api-access-954hb\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.732019 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd8752fa-c3a1-4eba-91dc-6af200eb8168-logs\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.732055 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-config-data\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.732186 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-config-data-custom\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.732497 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-internal-tls-certs\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.732533 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-combined-ca-bundle\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.732623 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-public-tls-certs\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.755778 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.772227 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.836412 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-954hb\" (UniqueName: 
\"kubernetes.io/projected/fd8752fa-c3a1-4eba-91dc-6af200eb8168-kube-api-access-954hb\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.836840 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd8752fa-c3a1-4eba-91dc-6af200eb8168-logs\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.837000 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-config-data\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.838581 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-config-data-custom\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.838965 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-internal-tls-certs\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.839147 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-combined-ca-bundle\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.839252 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-public-tls-certs\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.837304 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd8752fa-c3a1-4eba-91dc-6af200eb8168-logs\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.845129 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-config-data-custom\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.846079 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-public-tls-certs\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.847469 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-internal-tls-certs\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.848878 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-config-data\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.852159 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-combined-ca-bundle\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.863160 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-954hb\" (UniqueName: \"kubernetes.io/projected/fd8752fa-c3a1-4eba-91dc-6af200eb8168-kube-api-access-954hb\") pod \"barbican-api-6c874955f4-txmc8\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:17:59 crc kubenswrapper[5049]: I0127 17:17:59.986266 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:18:00 crc kubenswrapper[5049]: I0127 17:18:00.486180 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c874955f4-txmc8"] Jan 27 17:18:00 crc kubenswrapper[5049]: I0127 17:18:00.720636 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c4bd57ddb-fz2dp" event={"ID":"85620b2d-c74a-4c51-8129-c747016dc357","Type":"ContainerStarted","Data":"2d46166a5884cd1f3b001c9ab4d6aa5473e16cee138019fd911e628d4749058b"} Jan 27 17:18:00 crc kubenswrapper[5049]: I0127 17:18:00.720793 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:18:00 crc kubenswrapper[5049]: I0127 17:18:00.721065 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:18:00 crc kubenswrapper[5049]: I0127 17:18:00.724446 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" event={"ID":"bd3e6069-c811-467a-af8a-8aa62931fba7","Type":"ContainerStarted","Data":"abbb0a60f1744b376601053b5c64431abcdf94fe2ed2b16d7237e818daa49772"} Jan 27 17:18:00 crc kubenswrapper[5049]: I0127 17:18:00.725033 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:18:00 crc kubenswrapper[5049]: I0127 17:18:00.725052 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 17:18:00 crc kubenswrapper[5049]: I0127 17:18:00.725064 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 17:18:00 crc kubenswrapper[5049]: I0127 17:18:00.758660 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6c4bd57ddb-fz2dp" podStartSLOduration=3.758637405 podStartE2EDuration="3.758637405s" podCreationTimestamp="2026-01-27 17:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:18:00.740148497 +0000 UTC m=+1255.839122096" watchObservedRunningTime="2026-01-27 17:18:00.758637405 +0000 UTC m=+1255.857610964" Jan 27 17:18:02 crc kubenswrapper[5049]: W0127 17:18:02.147205 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd8752fa_c3a1_4eba_91dc_6af200eb8168.slice/crio-e439646be44da8ade07bfbde5536e4e5d953c83d527d70a6d3581887b1bd1073 WatchSource:0}: Error finding container e439646be44da8ade07bfbde5536e4e5d953c83d527d70a6d3581887b1bd1073: Status 404 returned error can't find the container with id e439646be44da8ade07bfbde5536e4e5d953c83d527d70a6d3581887b1bd1073 Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.209770 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.209819 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.243281 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.254805 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9wgjf" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.256877 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.268042 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" podStartSLOduration=6.268023008 podStartE2EDuration="6.268023008s" podCreationTimestamp="2026-01-27 17:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:18:00.773918392 +0000 UTC m=+1255.872891941" watchObservedRunningTime="2026-01-27 17:18:02.268023008 +0000 UTC m=+1257.366996557" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.388611 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2t2x\" (UniqueName: \"kubernetes.io/projected/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-kube-api-access-f2t2x\") pod \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\" (UID: \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\") " Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.388858 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-config\") pod \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\" (UID: \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\") " Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.388941 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-combined-ca-bundle\") pod \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\" (UID: \"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7\") " Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.401728 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-kube-api-access-f2t2x" (OuterVolumeSpecName: "kube-api-access-f2t2x") pod "97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7" (UID: "97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7"). InnerVolumeSpecName "kube-api-access-f2t2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.420228 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7" (UID: "97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.422343 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-config" (OuterVolumeSpecName: "config") pod "97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7" (UID: "97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.491361 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2t2x\" (UniqueName: \"kubernetes.io/projected/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-kube-api-access-f2t2x\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.491396 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.491406 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.756620 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9wgjf" event={"ID":"97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7","Type":"ContainerDied","Data":"50317188dc38bed51f45519d79fc487927684e4caed29375c4af8f21b487709b"} Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.756689 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50317188dc38bed51f45519d79fc487927684e4caed29375c4af8f21b487709b" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.756648 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9wgjf" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.758246 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c874955f4-txmc8" event={"ID":"fd8752fa-c3a1-4eba-91dc-6af200eb8168","Type":"ContainerStarted","Data":"e439646be44da8ade07bfbde5536e4e5d953c83d527d70a6d3581887b1bd1073"} Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.758437 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.758490 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.871834 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.871940 5049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 17:18:02 crc kubenswrapper[5049]: I0127 17:18:02.962311 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.582867 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vpd8n"] Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.583319 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" podUID="bd3e6069-c811-467a-af8a-8aa62931fba7" containerName="dnsmasq-dns" containerID="cri-o://abbb0a60f1744b376601053b5c64431abcdf94fe2ed2b16d7237e818daa49772" gracePeriod=10 Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.619786 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pf7p5"] Jan 27 17:18:03 crc kubenswrapper[5049]: E0127 17:18:03.620429 5049 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7" containerName="neutron-db-sync" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.620442 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7" containerName="neutron-db-sync" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.620632 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7" containerName="neutron-db-sync" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.621735 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.630112 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pf7p5"] Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.671790 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d99c987d8-s77jv"] Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.673296 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.674844 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4gthl" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.676219 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.679231 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.679847 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.691865 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d99c987d8-s77jv"] Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.745911 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.745983 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.746106 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-config\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.746129 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhlnt\" (UniqueName: \"kubernetes.io/projected/43c1b33a-6a3f-41d0-9df3-08eb35e89315-kube-api-access-fhlnt\") pod \"neutron-6d99c987d8-s77jv\" (UID: 
\"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.746147 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-combined-ca-bundle\") pod \"neutron-6d99c987d8-s77jv\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.746179 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-config\") pod \"neutron-6d99c987d8-s77jv\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.746208 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-httpd-config\") pod \"neutron-6d99c987d8-s77jv\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.746256 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-dns-svc\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.746284 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-ovndb-tls-certs\") pod \"neutron-6d99c987d8-s77jv\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.746340 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9p5c\" (UniqueName: \"kubernetes.io/projected/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-kube-api-access-h9p5c\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.746357 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.782974 5049 generic.go:334] "Generic (PLEG): container finished" podID="bd3e6069-c811-467a-af8a-8aa62931fba7" containerID="abbb0a60f1744b376601053b5c64431abcdf94fe2ed2b16d7237e818daa49772" exitCode=0 Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.783050 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" event={"ID":"bd3e6069-c811-467a-af8a-8aa62931fba7","Type":"ContainerDied","Data":"abbb0a60f1744b376601053b5c64431abcdf94fe2ed2b16d7237e818daa49772"} Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.848885 5049 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.849014 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhlnt\" (UniqueName: \"kubernetes.io/projected/43c1b33a-6a3f-41d0-9df3-08eb35e89315-kube-api-access-fhlnt\") pod \"neutron-6d99c987d8-s77jv\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.849039 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-config\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.849073 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-combined-ca-bundle\") pod \"neutron-6d99c987d8-s77jv\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.849094 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-config\") pod \"neutron-6d99c987d8-s77jv\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.849123 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-httpd-config\") pod \"neutron-6d99c987d8-s77jv\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.849176 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-dns-svc\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.849201 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-ovndb-tls-certs\") pod \"neutron-6d99c987d8-s77jv\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.849256 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9p5c\" (UniqueName: \"kubernetes.io/projected/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-kube-api-access-h9p5c\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.849277 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.849390 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.850500 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.851114 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-dns-svc\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.854958 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.855099 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.855888 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-ovndb-tls-certs\") pod \"neutron-6d99c987d8-s77jv\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.858475 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-config\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.864293 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-combined-ca-bundle\") pod \"neutron-6d99c987d8-s77jv\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.865147 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-config\") pod \"neutron-6d99c987d8-s77jv\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " 
pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.867441 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhlnt\" (UniqueName: \"kubernetes.io/projected/43c1b33a-6a3f-41d0-9df3-08eb35e89315-kube-api-access-fhlnt\") pod \"neutron-6d99c987d8-s77jv\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.871281 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9p5c\" (UniqueName: \"kubernetes.io/projected/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-kube-api-access-h9p5c\") pod \"dnsmasq-dns-85ff748b95-pf7p5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:03 crc kubenswrapper[5049]: I0127 17:18:03.874811 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-httpd-config\") pod \"neutron-6d99c987d8-s77jv\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:04 crc kubenswrapper[5049]: I0127 17:18:04.019596 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:04 crc kubenswrapper[5049]: I0127 17:18:04.024912 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:04 crc kubenswrapper[5049]: I0127 17:18:04.945607 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 17:18:04 crc kubenswrapper[5049]: I0127 17:18:04.946033 5049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 17:18:04 crc kubenswrapper[5049]: I0127 17:18:04.948541 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.465316 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6ffd87fcd5-fn4z7"] Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.474953 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.478353 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.478449 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.510413 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6ffd87fcd5-fn4z7"] Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.598520 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-public-tls-certs\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.598590 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-internal-tls-certs\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.598775 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-ovndb-tls-certs\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.598845 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-combined-ca-bundle\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.599258 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-config\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.599287 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5kmb\" (UniqueName: \"kubernetes.io/projected/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-kube-api-access-h5kmb\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.599332 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-httpd-config\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.701535 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-ovndb-tls-certs\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.701586 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-combined-ca-bundle\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.701692 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-config\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.701718 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5kmb\" (UniqueName: \"kubernetes.io/projected/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-kube-api-access-h5kmb\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.702487 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-httpd-config\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.702617 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-public-tls-certs\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.702638 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-internal-tls-certs\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.709119 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-public-tls-certs\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.710763 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-httpd-config\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.711136 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-ovndb-tls-certs\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: 
\"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.711950 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-config\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.712498 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-combined-ca-bundle\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.717203 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-internal-tls-certs\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.730600 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5kmb\" (UniqueName: \"kubernetes.io/projected/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-kube-api-access-h5kmb\") pod \"neutron-6ffd87fcd5-fn4z7\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:06 crc kubenswrapper[5049]: I0127 17:18:06.800017 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:07 crc kubenswrapper[5049]: I0127 17:18:07.358697 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" podUID="bd3e6069-c811-467a-af8a-8aa62931fba7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Jan 27 17:18:08 crc kubenswrapper[5049]: I0127 17:18:08.770573 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:18:08 crc kubenswrapper[5049]: I0127 17:18:08.838782 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" event={"ID":"bd3e6069-c811-467a-af8a-8aa62931fba7","Type":"ContainerDied","Data":"1b2b908342669259f60197fa77404517844c907a708de230d4eeefb060c64b40"} Jan 27 17:18:08 crc kubenswrapper[5049]: I0127 17:18:08.839047 5049 scope.go:117] "RemoveContainer" containerID="abbb0a60f1744b376601053b5c64431abcdf94fe2ed2b16d7237e818daa49772" Jan 27 17:18:08 crc kubenswrapper[5049]: I0127 17:18:08.839185 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-vpd8n" Jan 27 17:18:08 crc kubenswrapper[5049]: I0127 17:18:08.931217 5049 scope.go:117] "RemoveContainer" containerID="fbbf30b55f9b7a326a1f301393730edd8980a4eb58d739a9bba4e7c9224fcac0" Jan 27 17:18:08 crc kubenswrapper[5049]: I0127 17:18:08.939360 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-ovsdbserver-sb\") pod \"bd3e6069-c811-467a-af8a-8aa62931fba7\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " Jan 27 17:18:08 crc kubenswrapper[5049]: I0127 17:18:08.939432 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-config\") pod \"bd3e6069-c811-467a-af8a-8aa62931fba7\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " Jan 27 17:18:08 crc kubenswrapper[5049]: I0127 17:18:08.939456 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckkcl\" (UniqueName: \"kubernetes.io/projected/bd3e6069-c811-467a-af8a-8aa62931fba7-kube-api-access-ckkcl\") pod \"bd3e6069-c811-467a-af8a-8aa62931fba7\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " Jan 27 17:18:08 crc kubenswrapper[5049]: I0127 17:18:08.939493 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-dns-svc\") pod \"bd3e6069-c811-467a-af8a-8aa62931fba7\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " Jan 27 17:18:08 crc kubenswrapper[5049]: I0127 17:18:08.939511 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-dns-swift-storage-0\") pod \"bd3e6069-c811-467a-af8a-8aa62931fba7\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " Jan 27 17:18:08 crc kubenswrapper[5049]: I0127 17:18:08.939621 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-ovsdbserver-nb\") pod \"bd3e6069-c811-467a-af8a-8aa62931fba7\" (UID: \"bd3e6069-c811-467a-af8a-8aa62931fba7\") " Jan 27 17:18:08 crc kubenswrapper[5049]: I0127 17:18:08.959930 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3e6069-c811-467a-af8a-8aa62931fba7-kube-api-access-ckkcl" (OuterVolumeSpecName: "kube-api-access-ckkcl") pod "bd3e6069-c811-467a-af8a-8aa62931fba7" (UID: "bd3e6069-c811-467a-af8a-8aa62931fba7"). InnerVolumeSpecName "kube-api-access-ckkcl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.042137 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckkcl\" (UniqueName: \"kubernetes.io/projected/bd3e6069-c811-467a-af8a-8aa62931fba7-kube-api-access-ckkcl\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.070203 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6ffd87fcd5-fn4z7"] Jan 27 17:18:09 crc kubenswrapper[5049]: W0127 17:18:09.083081 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e0b118e_d036_4ae2_ac85_5ab90eeea2f5.slice/crio-bac231595afc205a3cd4fe58811d3bc5fcfe4bf090e3230f905e6a02cba693c9 WatchSource:0}: Error finding container bac231595afc205a3cd4fe58811d3bc5fcfe4bf090e3230f905e6a02cba693c9: Status 404 returned error can't find the container with id bac231595afc205a3cd4fe58811d3bc5fcfe4bf090e3230f905e6a02cba693c9 Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.141263 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-config" (OuterVolumeSpecName: "config") pod "bd3e6069-c811-467a-af8a-8aa62931fba7" (UID: "bd3e6069-c811-467a-af8a-8aa62931fba7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.144366 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd3e6069-c811-467a-af8a-8aa62931fba7" (UID: "bd3e6069-c811-467a-af8a-8aa62931fba7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.144880 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.144907 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.161219 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd3e6069-c811-467a-af8a-8aa62931fba7" (UID: "bd3e6069-c811-467a-af8a-8aa62931fba7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.179398 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.189070 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd3e6069-c811-467a-af8a-8aa62931fba7" (UID: "bd3e6069-c811-467a-af8a-8aa62931fba7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.204559 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bd3e6069-c811-467a-af8a-8aa62931fba7" (UID: "bd3e6069-c811-467a-af8a-8aa62931fba7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.249346 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.249387 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.249401 5049 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd3e6069-c811-467a-af8a-8aa62931fba7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.255729 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d99c987d8-s77jv"] Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.283749 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pf7p5"] Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.471596 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vpd8n"] Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.486269 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.535857 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-vpd8n"] Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.659022 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd3e6069-c811-467a-af8a-8aa62931fba7" path="/var/lib/kubelet/pods/bd3e6069-c811-467a-af8a-8aa62931fba7/volumes" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.875082 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d99c987d8-s77jv" event={"ID":"43c1b33a-6a3f-41d0-9df3-08eb35e89315","Type":"ContainerStarted","Data":"0076d4f73b147d4a839ee433c3cbd535c66b734bebda823c63537a255648784d"} Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.885527 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a08784-0e34-4e50-8cca-4f2845e7a11e","Type":"ContainerStarted","Data":"d61a0ca18c22ce958496034447aeb4cd981c9d2b0dffdcdd0ef9a3f4b881b342"} Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.885644 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="ceilometer-central-agent" containerID="cri-o://6d19d3d52bbd315e0e87ef32bfe01d32fcf884ed918c24ae6ea7a2d07e792b66" gracePeriod=30 Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.885746 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 17:18:09 crc 
kubenswrapper[5049]: I0127 17:18:09.885986 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="sg-core" containerID="cri-o://00cbf2b69fcf3dd52e4c1ec077bc6a2cab9216d7e365f44245df5894090a0cbe" gracePeriod=30 Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.886027 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="proxy-httpd" containerID="cri-o://d61a0ca18c22ce958496034447aeb4cd981c9d2b0dffdcdd0ef9a3f4b881b342" gracePeriod=30 Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.886074 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="ceilometer-notification-agent" containerID="cri-o://200bb5266b92ae3cd871bcb96377d468ca251a22b506d07385742aa7bf8b7fc4" gracePeriod=30 Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.899927 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c874955f4-txmc8" event={"ID":"fd8752fa-c3a1-4eba-91dc-6af200eb8168","Type":"ContainerStarted","Data":"d59eba8208304f410983b2c55e7914a2356771226c32f5c318d3bab3cadfc4df"} Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.899977 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c874955f4-txmc8" event={"ID":"fd8752fa-c3a1-4eba-91dc-6af200eb8168","Type":"ContainerStarted","Data":"35bd66078657b7d73fc268a9451cde269684cf371f822f4a1fccb3ebe53da3c9"} Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.900523 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.900936 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.906444 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" event={"ID":"adfa2378-a75a-41b5-9ea9-71c8da89f750","Type":"ContainerStarted","Data":"2f71706bcdb06fe2e17742b9ee8311b8e73605a2109fa8b21e0f7b3b6738662b"} Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.924125 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.655711431 podStartE2EDuration="44.924105944s" podCreationTimestamp="2026-01-27 17:17:25 +0000 UTC" firstStartedPulling="2026-01-27 17:17:26.378416885 +0000 UTC m=+1221.477390434" lastFinishedPulling="2026-01-27 17:18:08.646811408 +0000 UTC m=+1263.745784947" observedRunningTime="2026-01-27 17:18:09.918470903 +0000 UTC m=+1265.017444472" watchObservedRunningTime="2026-01-27 17:18:09.924105944 +0000 UTC m=+1265.023079493" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.931771 5049 generic.go:334] "Generic (PLEG): container finished" podID="69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" containerID="1001b669067845e02e790e3bb2d03d51aec8467676b42281b200fafb35c4838a" exitCode=0 Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.931854 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" event={"ID":"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5","Type":"ContainerDied","Data":"1001b669067845e02e790e3bb2d03d51aec8467676b42281b200fafb35c4838a"} Jan 27 17:18:09 crc 
kubenswrapper[5049]: I0127 17:18:09.931881 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" event={"ID":"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5","Type":"ContainerStarted","Data":"d28de34b7af17ea462c9c508e4a53a29eec7cd23446aa565287efbc9d3885e86"} Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.942446 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b447db58f-bdtv6" event={"ID":"ca9c6d10-6357-4632-9c0f-ff477e8526f0","Type":"ContainerStarted","Data":"02c12d734fa1941b4cc9bfcf5f2b4b9a40625cd00b35e0aaf8d32fae4ecdf5c3"} Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.957207 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c874955f4-txmc8" podStartSLOduration=10.95718533 podStartE2EDuration="10.95718533s" podCreationTimestamp="2026-01-27 17:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:18:09.951108676 +0000 UTC m=+1265.050082255" watchObservedRunningTime="2026-01-27 17:18:09.95718533 +0000 UTC m=+1265.056158879" Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.962190 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ffd87fcd5-fn4z7" event={"ID":"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5","Type":"ContainerStarted","Data":"1c2d3e443ad2b2895c21e0101df97d1926bdd94413808edf387efa19ebe1729b"} Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.962395 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ffd87fcd5-fn4z7" event={"ID":"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5","Type":"ContainerStarted","Data":"bac231595afc205a3cd4fe58811d3bc5fcfe4bf090e3230f905e6a02cba693c9"} Jan 27 17:18:09 crc kubenswrapper[5049]: I0127 17:18:09.990062 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" event={"ID":"76806163-4660-4265-ba1b-ed85f6d8c464","Type":"ContainerStarted","Data":"5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e"} Jan 27 17:18:10 crc kubenswrapper[5049]: I0127 17:18:10.006598 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" event={"ID":"25ad8919-34a1-4d3c-8f82-a8902bc857ff","Type":"ContainerStarted","Data":"6d80fe5a8e0b9fc37106192401722c045ef256250fc4655a6cdb8a26abe66cca"} Jan 27 17:18:10 crc kubenswrapper[5049]: I0127 17:18:10.006643 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" event={"ID":"25ad8919-34a1-4d3c-8f82-a8902bc857ff","Type":"ContainerStarted","Data":"88a4122cf8632a187f41681a10f1321b9f2c19ffcd4f009a2731da082771c3b1"} Jan 27 17:18:10 crc kubenswrapper[5049]: I0127 17:18:10.030858 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" podStartSLOduration=3.779570178 podStartE2EDuration="14.030834936s" podCreationTimestamp="2026-01-27 17:17:56 +0000 UTC" firstStartedPulling="2026-01-27 17:17:58.370080122 +0000 UTC m=+1253.469053671" lastFinishedPulling="2026-01-27 17:18:08.62134488 +0000 UTC m=+1263.720318429" observedRunningTime="2026-01-27 17:18:10.027169671 +0000 UTC m=+1265.126143240" watchObservedRunningTime="2026-01-27 17:18:10.030834936 +0000 UTC m=+1265.129808485" Jan 27 17:18:10 crc kubenswrapper[5049]: I0127 17:18:10.063898 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-worker-5b447db58f-bdtv6"] Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.030195 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d99c987d8-s77jv" event={"ID":"43c1b33a-6a3f-41d0-9df3-08eb35e89315","Type":"ContainerStarted","Data":"9ade653a6243dab269f655544c3e6b332582ce038ff0bc3d90f5a5dc8b566b5d"} Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.031403 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d99c987d8-s77jv" event={"ID":"43c1b33a-6a3f-41d0-9df3-08eb35e89315","Type":"ContainerStarted","Data":"95ebf3fa581f6b2ef306585f39b1a074437c276cccd966b6119dbd09f2d3a5ec"} Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.031434 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.058452 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ffd87fcd5-fn4z7" event={"ID":"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5","Type":"ContainerStarted","Data":"2313102ae32c4e0748a8107e17b26893550ff00f6401ee6e5754bc7a542c6ec4"} Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.058492 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.084577 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b447db58f-bdtv6" event={"ID":"ca9c6d10-6357-4632-9c0f-ff477e8526f0","Type":"ContainerStarted","Data":"ea17395c3b3bc8724181c64f6fb84ecf9ae264353cc2faa255466f7ac04ec368"} Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.084735 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5b447db58f-bdtv6" podUID="ca9c6d10-6357-4632-9c0f-ff477e8526f0" containerName="barbican-worker-log" containerID="cri-o://02c12d734fa1941b4cc9bfcf5f2b4b9a40625cd00b35e0aaf8d32fae4ecdf5c3" gracePeriod=30 Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.084795 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5b447db58f-bdtv6" podUID="ca9c6d10-6357-4632-9c0f-ff477e8526f0" containerName="barbican-worker" containerID="cri-o://ea17395c3b3bc8724181c64f6fb84ecf9ae264353cc2faa255466f7ac04ec368" gracePeriod=30 Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.089808 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d99c987d8-s77jv" podStartSLOduration=8.089788168 podStartE2EDuration="8.089788168s" podCreationTimestamp="2026-01-27 17:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:18:11.085093104 +0000 UTC m=+1266.184066653" watchObservedRunningTime="2026-01-27 17:18:11.089788168 +0000 UTC m=+1266.188761717" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.097811 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" event={"ID":"adfa2378-a75a-41b5-9ea9-71c8da89f750","Type":"ContainerStarted","Data":"50cdd79f3cf12114854b53391e39352052915fc423551b6aa65b2a4bd254ce59"} Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.126237 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dqp4j" 
event={"ID":"bd627f49-e48d-4f81-a41c-3c753fdb27b3","Type":"ContainerStarted","Data":"ff18291a08fce870db4a8157c08cef6cde160a9942e9acc6a215e113f67e1c1b"} Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.137481 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" event={"ID":"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5","Type":"ContainerStarted","Data":"810f682ef552720fc54a96b4be144f36725cb146f2d0e2ce324bc8a2de5a1463"} Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.138534 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.140710 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5b447db58f-bdtv6" podStartSLOduration=4.536078751 podStartE2EDuration="15.140662982s" podCreationTimestamp="2026-01-27 17:17:56 +0000 UTC" firstStartedPulling="2026-01-27 17:17:57.930902673 +0000 UTC m=+1253.029876242" lastFinishedPulling="2026-01-27 17:18:08.535486924 +0000 UTC m=+1263.634460473" observedRunningTime="2026-01-27 17:18:11.131444109 +0000 UTC m=+1266.230417658" watchObservedRunningTime="2026-01-27 17:18:11.140662982 +0000 UTC m=+1266.239636531" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.152051 5049 generic.go:334] "Generic (PLEG): container finished" podID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerID="d61a0ca18c22ce958496034447aeb4cd981c9d2b0dffdcdd0ef9a3f4b881b342" exitCode=0 Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.152093 5049 generic.go:334] "Generic (PLEG): container finished" podID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerID="00cbf2b69fcf3dd52e4c1ec077bc6a2cab9216d7e365f44245df5894090a0cbe" exitCode=2 Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.152103 5049 generic.go:334] "Generic (PLEG): container finished" podID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerID="6d19d3d52bbd315e0e87ef32bfe01d32fcf884ed918c24ae6ea7a2d07e792b66" exitCode=0 Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.152163 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a08784-0e34-4e50-8cca-4f2845e7a11e","Type":"ContainerDied","Data":"d61a0ca18c22ce958496034447aeb4cd981c9d2b0dffdcdd0ef9a3f4b881b342"} Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.152201 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a08784-0e34-4e50-8cca-4f2845e7a11e","Type":"ContainerDied","Data":"00cbf2b69fcf3dd52e4c1ec077bc6a2cab9216d7e365f44245df5894090a0cbe"} Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.152212 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a08784-0e34-4e50-8cca-4f2845e7a11e","Type":"ContainerDied","Data":"6d19d3d52bbd315e0e87ef32bfe01d32fcf884ed918c24ae6ea7a2d07e792b66"} Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.198161 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" event={"ID":"76806163-4660-4265-ba1b-ed85f6d8c464","Type":"ContainerStarted","Data":"703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9"} Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.245147 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6ffd87fcd5-fn4z7" podStartSLOduration=5.245118749 podStartE2EDuration="5.245118749s" podCreationTimestamp="2026-01-27 17:18:06 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:18:11.160690775 +0000 UTC m=+1266.259664314" watchObservedRunningTime="2026-01-27 17:18:11.245118749 +0000 UTC m=+1266.344092308" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.330359 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-dqp4j" podStartSLOduration=5.1018736239999996 podStartE2EDuration="46.330335846s" podCreationTimestamp="2026-01-27 17:17:25 +0000 UTC" firstStartedPulling="2026-01-27 17:17:27.417908943 +0000 UTC m=+1222.516882492" lastFinishedPulling="2026-01-27 17:18:08.646371165 +0000 UTC m=+1263.745344714" observedRunningTime="2026-01-27 17:18:11.240784325 +0000 UTC m=+1266.339757884" watchObservedRunningTime="2026-01-27 17:18:11.330335846 +0000 UTC m=+1266.429309395" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.350859 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" podStartSLOduration=5.087399146 podStartE2EDuration="15.350837782s" podCreationTimestamp="2026-01-27 17:17:56 +0000 UTC" firstStartedPulling="2026-01-27 17:17:58.279338477 +0000 UTC m=+1253.378312026" lastFinishedPulling="2026-01-27 17:18:08.542777103 +0000 UTC m=+1263.641750662" observedRunningTime="2026-01-27 17:18:11.28186717 +0000 UTC m=+1266.380840719" watchObservedRunningTime="2026-01-27 17:18:11.350837782 +0000 UTC m=+1266.449811331" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.398998 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" podStartSLOduration=8.398972789 podStartE2EDuration="8.398972789s" podCreationTimestamp="2026-01-27 17:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:18:11.332054025 +0000 UTC m=+1266.431027574" watchObservedRunningTime="2026-01-27 17:18:11.398972789 +0000 UTC m=+1266.497946338" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.424404 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" podStartSLOduration=4.903570521 podStartE2EDuration="15.424385806s" podCreationTimestamp="2026-01-27 17:17:56 +0000 UTC" firstStartedPulling="2026-01-27 17:17:58.126124776 +0000 UTC m=+1253.225098315" lastFinishedPulling="2026-01-27 17:18:08.646940051 +0000 UTC m=+1263.745913600" observedRunningTime="2026-01-27 17:18:11.355845596 +0000 UTC m=+1266.454819155" watchObservedRunningTime="2026-01-27 17:18:11.424385806 +0000 UTC m=+1266.523359355" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.470066 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6b7fd485fd-v5jzn"] Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.630561 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.777354 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a08784-0e34-4e50-8cca-4f2845e7a11e-run-httpd\") pod \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.777733 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-scripts\") pod \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.777837 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-combined-ca-bundle\") pod \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.777754 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7a08784-0e34-4e50-8cca-4f2845e7a11e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e7a08784-0e34-4e50-8cca-4f2845e7a11e" (UID: "e7a08784-0e34-4e50-8cca-4f2845e7a11e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.778008 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h75z\" (UniqueName: \"kubernetes.io/projected/e7a08784-0e34-4e50-8cca-4f2845e7a11e-kube-api-access-9h75z\") pod \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.778184 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a08784-0e34-4e50-8cca-4f2845e7a11e-log-httpd\") pod \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.778266 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-config-data\") pod \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.778366 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-sg-core-conf-yaml\") pod \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\" (UID: \"e7a08784-0e34-4e50-8cca-4f2845e7a11e\") " Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.778514 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7a08784-0e34-4e50-8cca-4f2845e7a11e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e7a08784-0e34-4e50-8cca-4f2845e7a11e" (UID: "e7a08784-0e34-4e50-8cca-4f2845e7a11e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.779080 5049 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a08784-0e34-4e50-8cca-4f2845e7a11e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.779148 5049 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7a08784-0e34-4e50-8cca-4f2845e7a11e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.801973 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-scripts" (OuterVolumeSpecName: "scripts") pod "e7a08784-0e34-4e50-8cca-4f2845e7a11e" (UID: "e7a08784-0e34-4e50-8cca-4f2845e7a11e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.819791 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a08784-0e34-4e50-8cca-4f2845e7a11e-kube-api-access-9h75z" (OuterVolumeSpecName: "kube-api-access-9h75z") pod "e7a08784-0e34-4e50-8cca-4f2845e7a11e" (UID: "e7a08784-0e34-4e50-8cca-4f2845e7a11e"). InnerVolumeSpecName "kube-api-access-9h75z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.847742 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e7a08784-0e34-4e50-8cca-4f2845e7a11e" (UID: "e7a08784-0e34-4e50-8cca-4f2845e7a11e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.883521 5049 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.883565 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.883579 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h75z\" (UniqueName: \"kubernetes.io/projected/e7a08784-0e34-4e50-8cca-4f2845e7a11e-kube-api-access-9h75z\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.895524 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7a08784-0e34-4e50-8cca-4f2845e7a11e" (UID: "e7a08784-0e34-4e50-8cca-4f2845e7a11e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.916354 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-config-data" (OuterVolumeSpecName: "config-data") pod "e7a08784-0e34-4e50-8cca-4f2845e7a11e" (UID: "e7a08784-0e34-4e50-8cca-4f2845e7a11e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.984819 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:11 crc kubenswrapper[5049]: I0127 17:18:11.985138 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a08784-0e34-4e50-8cca-4f2845e7a11e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.220775 5049 generic.go:334] "Generic (PLEG): container finished" podID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerID="200bb5266b92ae3cd871bcb96377d468ca251a22b506d07385742aa7bf8b7fc4" exitCode=0 Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.220833 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a08784-0e34-4e50-8cca-4f2845e7a11e","Type":"ContainerDied","Data":"200bb5266b92ae3cd871bcb96377d468ca251a22b506d07385742aa7bf8b7fc4"} Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.220859 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7a08784-0e34-4e50-8cca-4f2845e7a11e","Type":"ContainerDied","Data":"7fc49c991d1d3ff8bcd31d5130e69fcf2081deb9fe64fd8dd798f0270e7bc850"} Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.220877 5049 scope.go:117] "RemoveContainer" containerID="d61a0ca18c22ce958496034447aeb4cd981c9d2b0dffdcdd0ef9a3f4b881b342" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.221003 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.236072 5049 generic.go:334] "Generic (PLEG): container finished" podID="ca9c6d10-6357-4632-9c0f-ff477e8526f0" containerID="02c12d734fa1941b4cc9bfcf5f2b4b9a40625cd00b35e0aaf8d32fae4ecdf5c3" exitCode=143 Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.239434 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b447db58f-bdtv6" event={"ID":"ca9c6d10-6357-4632-9c0f-ff477e8526f0","Type":"ContainerDied","Data":"02c12d734fa1941b4cc9bfcf5f2b4b9a40625cd00b35e0aaf8d32fae4ecdf5c3"} Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.250138 5049 scope.go:117] "RemoveContainer" containerID="00cbf2b69fcf3dd52e4c1ec077bc6a2cab9216d7e365f44245df5894090a0cbe" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.270795 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.280188 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.292628 5049 scope.go:117] "RemoveContainer" containerID="200bb5266b92ae3cd871bcb96377d468ca251a22b506d07385742aa7bf8b7fc4" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.299858 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:18:12 crc kubenswrapper[5049]: E0127 17:18:12.300271 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="ceilometer-notification-agent" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.300297 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" 
containerName="ceilometer-notification-agent" Jan 27 17:18:12 crc kubenswrapper[5049]: E0127 17:18:12.300317 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3e6069-c811-467a-af8a-8aa62931fba7" containerName="init" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.300325 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3e6069-c811-467a-af8a-8aa62931fba7" containerName="init" Jan 27 17:18:12 crc kubenswrapper[5049]: E0127 17:18:12.300347 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="proxy-httpd" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.300356 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="proxy-httpd" Jan 27 17:18:12 crc kubenswrapper[5049]: E0127 17:18:12.300365 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="ceilometer-central-agent" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.300372 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="ceilometer-central-agent" Jan 27 17:18:12 crc kubenswrapper[5049]: E0127 17:18:12.300385 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3e6069-c811-467a-af8a-8aa62931fba7" containerName="dnsmasq-dns" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.300392 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3e6069-c811-467a-af8a-8aa62931fba7" containerName="dnsmasq-dns" Jan 27 17:18:12 crc kubenswrapper[5049]: E0127 17:18:12.300404 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="sg-core" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.300411 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="sg-core" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.300585 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="proxy-httpd" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.300606 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="ceilometer-central-agent" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.300622 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="sg-core" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.300631 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" containerName="ceilometer-notification-agent" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.300643 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd3e6069-c811-467a-af8a-8aa62931fba7" containerName="dnsmasq-dns" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.302185 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.308750 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.308979 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.320957 5049 scope.go:117] "RemoveContainer" containerID="6d19d3d52bbd315e0e87ef32bfe01d32fcf884ed918c24ae6ea7a2d07e792b66" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.334995 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.359845 5049 scope.go:117] "RemoveContainer" containerID="d61a0ca18c22ce958496034447aeb4cd981c9d2b0dffdcdd0ef9a3f4b881b342" Jan 27 17:18:12 crc kubenswrapper[5049]: E0127 17:18:12.360247 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d61a0ca18c22ce958496034447aeb4cd981c9d2b0dffdcdd0ef9a3f4b881b342\": container with ID starting with d61a0ca18c22ce958496034447aeb4cd981c9d2b0dffdcdd0ef9a3f4b881b342 not found: ID does not exist" containerID="d61a0ca18c22ce958496034447aeb4cd981c9d2b0dffdcdd0ef9a3f4b881b342" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.360290 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d61a0ca18c22ce958496034447aeb4cd981c9d2b0dffdcdd0ef9a3f4b881b342"} err="failed to get container status \"d61a0ca18c22ce958496034447aeb4cd981c9d2b0dffdcdd0ef9a3f4b881b342\": rpc error: code = NotFound desc = could not find container \"d61a0ca18c22ce958496034447aeb4cd981c9d2b0dffdcdd0ef9a3f4b881b342\": container with ID starting with d61a0ca18c22ce958496034447aeb4cd981c9d2b0dffdcdd0ef9a3f4b881b342 not found: ID does not exist" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.360322 5049 scope.go:117] "RemoveContainer" containerID="00cbf2b69fcf3dd52e4c1ec077bc6a2cab9216d7e365f44245df5894090a0cbe" Jan 27 17:18:12 crc kubenswrapper[5049]: E0127 17:18:12.360627 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00cbf2b69fcf3dd52e4c1ec077bc6a2cab9216d7e365f44245df5894090a0cbe\": container with ID starting with 00cbf2b69fcf3dd52e4c1ec077bc6a2cab9216d7e365f44245df5894090a0cbe not found: ID does not exist" containerID="00cbf2b69fcf3dd52e4c1ec077bc6a2cab9216d7e365f44245df5894090a0cbe" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.360658 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00cbf2b69fcf3dd52e4c1ec077bc6a2cab9216d7e365f44245df5894090a0cbe"} err="failed to get container status \"00cbf2b69fcf3dd52e4c1ec077bc6a2cab9216d7e365f44245df5894090a0cbe\": rpc error: code = NotFound desc = could not find container \"00cbf2b69fcf3dd52e4c1ec077bc6a2cab9216d7e365f44245df5894090a0cbe\": container with ID starting with 00cbf2b69fcf3dd52e4c1ec077bc6a2cab9216d7e365f44245df5894090a0cbe not found: ID does not exist" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.360691 5049 scope.go:117] "RemoveContainer" containerID="200bb5266b92ae3cd871bcb96377d468ca251a22b506d07385742aa7bf8b7fc4" Jan 27 17:18:12 crc kubenswrapper[5049]: E0127 17:18:12.360899 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"200bb5266b92ae3cd871bcb96377d468ca251a22b506d07385742aa7bf8b7fc4\": container with ID starting with 200bb5266b92ae3cd871bcb96377d468ca251a22b506d07385742aa7bf8b7fc4 not found: ID does not exist" containerID="200bb5266b92ae3cd871bcb96377d468ca251a22b506d07385742aa7bf8b7fc4" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.360942 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"200bb5266b92ae3cd871bcb96377d468ca251a22b506d07385742aa7bf8b7fc4"} err="failed to get container status \"200bb5266b92ae3cd871bcb96377d468ca251a22b506d07385742aa7bf8b7fc4\": rpc error: code = NotFound desc = could not find container \"200bb5266b92ae3cd871bcb96377d468ca251a22b506d07385742aa7bf8b7fc4\": container with ID starting with 200bb5266b92ae3cd871bcb96377d468ca251a22b506d07385742aa7bf8b7fc4 not found: ID does not exist" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.360964 5049 scope.go:117] "RemoveContainer" containerID="6d19d3d52bbd315e0e87ef32bfe01d32fcf884ed918c24ae6ea7a2d07e792b66" Jan 27 17:18:12 crc kubenswrapper[5049]: E0127 17:18:12.361211 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d19d3d52bbd315e0e87ef32bfe01d32fcf884ed918c24ae6ea7a2d07e792b66\": container with ID starting with 6d19d3d52bbd315e0e87ef32bfe01d32fcf884ed918c24ae6ea7a2d07e792b66 not found: ID does not exist" containerID="6d19d3d52bbd315e0e87ef32bfe01d32fcf884ed918c24ae6ea7a2d07e792b66" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.361269 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d19d3d52bbd315e0e87ef32bfe01d32fcf884ed918c24ae6ea7a2d07e792b66"} err="failed to get container status \"6d19d3d52bbd315e0e87ef32bfe01d32fcf884ed918c24ae6ea7a2d07e792b66\": rpc error: code = NotFound desc = could not find container \"6d19d3d52bbd315e0e87ef32bfe01d32fcf884ed918c24ae6ea7a2d07e792b66\": container with ID starting with 6d19d3d52bbd315e0e87ef32bfe01d32fcf884ed918c24ae6ea7a2d07e792b66 not found: ID does not exist" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.492748 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b68f766-1ba0-4041-a963-8d115bacfc30-run-httpd\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.492848 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-config-data\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.492951 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.493003 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b68f766-1ba0-4041-a963-8d115bacfc30-log-httpd\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") 
" pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.493078 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-scripts\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.493123 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hdm7\" (UniqueName: \"kubernetes.io/projected/0b68f766-1ba0-4041-a963-8d115bacfc30-kube-api-access-6hdm7\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.493188 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.594648 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.594721 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b68f766-1ba0-4041-a963-8d115bacfc30-log-httpd\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.594777 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-scripts\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.594797 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hdm7\" (UniqueName: \"kubernetes.io/projected/0b68f766-1ba0-4041-a963-8d115bacfc30-kube-api-access-6hdm7\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.594841 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.594874 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b68f766-1ba0-4041-a963-8d115bacfc30-run-httpd\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.594891 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-config-data\") pod \"ceilometer-0\" 
(UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.596339 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b68f766-1ba0-4041-a963-8d115bacfc30-run-httpd\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.596408 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b68f766-1ba0-4041-a963-8d115bacfc30-log-httpd\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.600914 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-scripts\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.601163 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-config-data\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.615107 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.615702 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.617826 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hdm7\" (UniqueName: \"kubernetes.io/projected/0b68f766-1ba0-4041-a963-8d115bacfc30-kube-api-access-6hdm7\") pod \"ceilometer-0\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " pod="openstack/ceilometer-0" Jan 27 17:18:12 crc kubenswrapper[5049]: I0127 17:18:12.625464 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:18:13 crc kubenswrapper[5049]: I0127 17:18:13.151640 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:18:13 crc kubenswrapper[5049]: W0127 17:18:13.156053 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b68f766_1ba0_4041_a963_8d115bacfc30.slice/crio-1afa66230b35142f921633722943f21b796c96d8fe8ae1b26591ebdd0a1d2483 WatchSource:0}: Error finding container 1afa66230b35142f921633722943f21b796c96d8fe8ae1b26591ebdd0a1d2483: Status 404 returned error can't find the container with id 1afa66230b35142f921633722943f21b796c96d8fe8ae1b26591ebdd0a1d2483 Jan 27 17:18:13 crc kubenswrapper[5049]: I0127 17:18:13.248908 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" podUID="76806163-4660-4265-ba1b-ed85f6d8c464" containerName="barbican-keystone-listener-log" containerID="cri-o://5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e" gracePeriod=30 Jan 27 17:18:13 crc kubenswrapper[5049]: I0127 17:18:13.249453 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b68f766-1ba0-4041-a963-8d115bacfc30","Type":"ContainerStarted","Data":"1afa66230b35142f921633722943f21b796c96d8fe8ae1b26591ebdd0a1d2483"} Jan 27 17:18:13 crc kubenswrapper[5049]: I0127 17:18:13.249511 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" podUID="76806163-4660-4265-ba1b-ed85f6d8c464" containerName="barbican-keystone-listener" containerID="cri-o://703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9" gracePeriod=30 Jan 27 17:18:13 crc kubenswrapper[5049]: I0127 17:18:13.673043 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7a08784-0e34-4e50-8cca-4f2845e7a11e" path="/var/lib/kubelet/pods/e7a08784-0e34-4e50-8cca-4f2845e7a11e/volumes" Jan 27 17:18:13 crc kubenswrapper[5049]: I0127 17:18:13.931976 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.031398 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-config-data-custom\") pod \"76806163-4660-4265-ba1b-ed85f6d8c464\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.031607 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-combined-ca-bundle\") pod \"76806163-4660-4265-ba1b-ed85f6d8c464\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.031661 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76806163-4660-4265-ba1b-ed85f6d8c464-logs\") pod \"76806163-4660-4265-ba1b-ed85f6d8c464\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.031773 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-config-data\") pod \"76806163-4660-4265-ba1b-ed85f6d8c464\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.031833 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shsgc\" (UniqueName: \"kubernetes.io/projected/76806163-4660-4265-ba1b-ed85f6d8c464-kube-api-access-shsgc\") pod \"76806163-4660-4265-ba1b-ed85f6d8c464\" (UID: \"76806163-4660-4265-ba1b-ed85f6d8c464\") " Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.032107 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76806163-4660-4265-ba1b-ed85f6d8c464-logs" (OuterVolumeSpecName: "logs") pod "76806163-4660-4265-ba1b-ed85f6d8c464" (UID: "76806163-4660-4265-ba1b-ed85f6d8c464"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.032747 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76806163-4660-4265-ba1b-ed85f6d8c464-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.036746 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76806163-4660-4265-ba1b-ed85f6d8c464-kube-api-access-shsgc" (OuterVolumeSpecName: "kube-api-access-shsgc") pod "76806163-4660-4265-ba1b-ed85f6d8c464" (UID: "76806163-4660-4265-ba1b-ed85f6d8c464"). InnerVolumeSpecName "kube-api-access-shsgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.037869 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "76806163-4660-4265-ba1b-ed85f6d8c464" (UID: "76806163-4660-4265-ba1b-ed85f6d8c464"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.062325 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76806163-4660-4265-ba1b-ed85f6d8c464" (UID: "76806163-4660-4265-ba1b-ed85f6d8c464"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.080333 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-config-data" (OuterVolumeSpecName: "config-data") pod "76806163-4660-4265-ba1b-ed85f6d8c464" (UID: "76806163-4660-4265-ba1b-ed85f6d8c464"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.134879 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.134907 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.134917 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shsgc\" (UniqueName: \"kubernetes.io/projected/76806163-4660-4265-ba1b-ed85f6d8c464-kube-api-access-shsgc\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.134927 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76806163-4660-4265-ba1b-ed85f6d8c464-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.258267 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b68f766-1ba0-4041-a963-8d115bacfc30","Type":"ContainerStarted","Data":"cc854d20296f557cd4d5d9bb19c6c03d1e1ce6eba46ab19dccf9ef4e6ff34dc3"} Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.260183 5049 generic.go:334] "Generic (PLEG): container finished" podID="76806163-4660-4265-ba1b-ed85f6d8c464" containerID="703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9" exitCode=0 Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.260206 5049 generic.go:334] "Generic (PLEG): container finished" podID="76806163-4660-4265-ba1b-ed85f6d8c464" containerID="5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e" exitCode=143 Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.260220 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" event={"ID":"76806163-4660-4265-ba1b-ed85f6d8c464","Type":"ContainerDied","Data":"703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9"} Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.260237 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" event={"ID":"76806163-4660-4265-ba1b-ed85f6d8c464","Type":"ContainerDied","Data":"5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e"} Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.260248 5049 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" event={"ID":"76806163-4660-4265-ba1b-ed85f6d8c464","Type":"ContainerDied","Data":"081604bcac92a609102630edaadaf318044bf585b4e7058ac2eba2ec192339fc"} Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.260265 5049 scope.go:117] "RemoveContainer" containerID="703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.260280 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6b7fd485fd-v5jzn" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.281634 5049 scope.go:117] "RemoveContainer" containerID="5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.296602 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6b7fd485fd-v5jzn"] Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.303279 5049 scope.go:117] "RemoveContainer" containerID="703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9" Jan 27 17:18:14 crc kubenswrapper[5049]: E0127 17:18:14.303703 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9\": container with ID starting with 703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9 not found: ID does not exist" containerID="703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.303740 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9"} err="failed to get container status \"703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9\": rpc error: code = NotFound desc = could not find container \"703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9\": container with ID starting with 703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9 not found: ID does not exist" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.303764 5049 scope.go:117] "RemoveContainer" containerID="5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e" Jan 27 17:18:14 crc kubenswrapper[5049]: E0127 17:18:14.304080 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e\": container with ID starting with 5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e not found: ID does not exist" containerID="5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.304120 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e"} err="failed to get container status \"5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e\": rpc error: code = NotFound desc = could not find container \"5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e\": container with ID starting with 5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e not found: ID does not exist" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.304146 5049 
scope.go:117] "RemoveContainer" containerID="703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.304392 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9"} err="failed to get container status \"703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9\": rpc error: code = NotFound desc = could not find container \"703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9\": container with ID starting with 703894016f38d135cd3c81b4bd840c7f058a6aad5aac52ca831fcd652637edf9 not found: ID does not exist" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.304417 5049 scope.go:117] "RemoveContainer" containerID="5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.304731 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e"} err="failed to get container status \"5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e\": rpc error: code = NotFound desc = could not find container \"5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e\": container with ID starting with 5faf048fcc198c4c28042e3584803a4870a2557fa6c902de4758cf73a23e038e not found: ID does not exist" Jan 27 17:18:14 crc kubenswrapper[5049]: I0127 17:18:14.304906 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-6b7fd485fd-v5jzn"] Jan 27 17:18:14 crc kubenswrapper[5049]: E0127 17:18:14.440877 5049 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76806163_4660_4265_ba1b_ed85f6d8c464.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76806163_4660_4265_ba1b_ed85f6d8c464.slice/crio-081604bcac92a609102630edaadaf318044bf585b4e7058ac2eba2ec192339fc\": RecentStats: unable to find data in memory cache]" Jan 27 17:18:15 crc kubenswrapper[5049]: I0127 17:18:15.272683 5049 generic.go:334] "Generic (PLEG): container finished" podID="bd627f49-e48d-4f81-a41c-3c753fdb27b3" containerID="ff18291a08fce870db4a8157c08cef6cde160a9942e9acc6a215e113f67e1c1b" exitCode=0 Jan 27 17:18:15 crc kubenswrapper[5049]: I0127 17:18:15.272713 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dqp4j" event={"ID":"bd627f49-e48d-4f81-a41c-3c753fdb27b3","Type":"ContainerDied","Data":"ff18291a08fce870db4a8157c08cef6cde160a9942e9acc6a215e113f67e1c1b"} Jan 27 17:18:15 crc kubenswrapper[5049]: I0127 17:18:15.276237 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b68f766-1ba0-4041-a963-8d115bacfc30","Type":"ContainerStarted","Data":"54661e2c07ff58f9fc9388a5b237bae129bcfeaff2a90f47c3ff548a2c5e3364"} Jan 27 17:18:15 crc kubenswrapper[5049]: I0127 17:18:15.690540 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76806163-4660-4265-ba1b-ed85f6d8c464" path="/var/lib/kubelet/pods/76806163-4660-4265-ba1b-ed85f6d8c464/volumes" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.302415 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0b68f766-1ba0-4041-a963-8d115bacfc30","Type":"ContainerStarted","Data":"02693659af0a8f83e6f343ad384ad7898ed03d93a76b5ac9b9e8d3acabfc953c"} Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.581051 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.587989 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.686931 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-66dcf5f4cb-7jts2"] Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.687185 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-66dcf5f4cb-7jts2" podUID="53e4e251-30ea-4628-9490-d88425a297ce" containerName="barbican-api-log" containerID="cri-o://4495a59b101f2f8e71cb450e895a12da1a97156a119aeceb32ed7d78686b04a8" gracePeriod=30 Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.687702 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-66dcf5f4cb-7jts2" podUID="53e4e251-30ea-4628-9490-d88425a297ce" containerName="barbican-api" containerID="cri-o://1a6e777297c190452814bc9c3ab5451de8be3afe6a0f7e2abb6ad261030154af" gracePeriod=30 Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.708414 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.881371 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-config-data\") pod \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.881432 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-db-sync-config-data\") pod \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.881472 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-combined-ca-bundle\") pod \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.881567 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bhhz\" (UniqueName: \"kubernetes.io/projected/bd627f49-e48d-4f81-a41c-3c753fdb27b3-kube-api-access-9bhhz\") pod \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.881649 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd627f49-e48d-4f81-a41c-3c753fdb27b3-etc-machine-id\") pod \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.881720 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-scripts\") pod \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\" (UID: \"bd627f49-e48d-4f81-a41c-3c753fdb27b3\") " Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.882156 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd627f49-e48d-4f81-a41c-3c753fdb27b3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bd627f49-e48d-4f81-a41c-3c753fdb27b3" (UID: "bd627f49-e48d-4f81-a41c-3c753fdb27b3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.886934 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bd627f49-e48d-4f81-a41c-3c753fdb27b3" (UID: "bd627f49-e48d-4f81-a41c-3c753fdb27b3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.887752 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd627f49-e48d-4f81-a41c-3c753fdb27b3-kube-api-access-9bhhz" (OuterVolumeSpecName: "kube-api-access-9bhhz") pod "bd627f49-e48d-4f81-a41c-3c753fdb27b3" (UID: "bd627f49-e48d-4f81-a41c-3c753fdb27b3"). InnerVolumeSpecName "kube-api-access-9bhhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.887762 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-scripts" (OuterVolumeSpecName: "scripts") pod "bd627f49-e48d-4f81-a41c-3c753fdb27b3" (UID: "bd627f49-e48d-4f81-a41c-3c753fdb27b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.909989 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd627f49-e48d-4f81-a41c-3c753fdb27b3" (UID: "bd627f49-e48d-4f81-a41c-3c753fdb27b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.929403 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-config-data" (OuterVolumeSpecName: "config-data") pod "bd627f49-e48d-4f81-a41c-3c753fdb27b3" (UID: "bd627f49-e48d-4f81-a41c-3c753fdb27b3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.983157 5049 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd627f49-e48d-4f81-a41c-3c753fdb27b3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.983187 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.983214 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.983222 5049 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.983232 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd627f49-e48d-4f81-a41c-3c753fdb27b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:16 crc kubenswrapper[5049]: I0127 17:18:16.983242 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bhhz\" (UniqueName: \"kubernetes.io/projected/bd627f49-e48d-4f81-a41c-3c753fdb27b3-kube-api-access-9bhhz\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.314707 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dqp4j" event={"ID":"bd627f49-e48d-4f81-a41c-3c753fdb27b3","Type":"ContainerDied","Data":"86ce77af3fb279939f43f7c7da15fbf300fc74b87c8bc92557c9ba62e88d0d0b"} Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.314763 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86ce77af3fb279939f43f7c7da15fbf300fc74b87c8bc92557c9ba62e88d0d0b" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.314874 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dqp4j" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.321940 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b68f766-1ba0-4041-a963-8d115bacfc30","Type":"ContainerStarted","Data":"be3176317fe4a09785260b85b11a6aedb8f5aa434ba3039a57f442778bacddf2"} Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.323424 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.327112 5049 generic.go:334] "Generic (PLEG): container finished" podID="53e4e251-30ea-4628-9490-d88425a297ce" containerID="4495a59b101f2f8e71cb450e895a12da1a97156a119aeceb32ed7d78686b04a8" exitCode=143 Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.327528 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66dcf5f4cb-7jts2" event={"ID":"53e4e251-30ea-4628-9490-d88425a297ce","Type":"ContainerDied","Data":"4495a59b101f2f8e71cb450e895a12da1a97156a119aeceb32ed7d78686b04a8"} Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.348318 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.163426083 podStartE2EDuration="5.348300608s" podCreationTimestamp="2026-01-27 17:18:12 +0000 UTC" firstStartedPulling="2026-01-27 17:18:13.159139644 +0000 UTC m=+1268.258113193" lastFinishedPulling="2026-01-27 17:18:16.344014149 +0000 UTC m=+1271.442987718" observedRunningTime="2026-01-27 17:18:17.343521632 +0000 UTC m=+1272.442495171" watchObservedRunningTime="2026-01-27 17:18:17.348300608 +0000 UTC m=+1272.447274157" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.542216 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 17:18:17 crc kubenswrapper[5049]: E0127 17:18:17.542653 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd627f49-e48d-4f81-a41c-3c753fdb27b3" containerName="cinder-db-sync" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.544818 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd627f49-e48d-4f81-a41c-3c753fdb27b3" containerName="cinder-db-sync" Jan 27 17:18:17 crc kubenswrapper[5049]: E0127 17:18:17.544915 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76806163-4660-4265-ba1b-ed85f6d8c464" containerName="barbican-keystone-listener-log" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.544927 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="76806163-4660-4265-ba1b-ed85f6d8c464" containerName="barbican-keystone-listener-log" Jan 27 17:18:17 crc kubenswrapper[5049]: E0127 17:18:17.544976 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76806163-4660-4265-ba1b-ed85f6d8c464" containerName="barbican-keystone-listener" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.544985 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="76806163-4660-4265-ba1b-ed85f6d8c464" containerName="barbican-keystone-listener" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.545342 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="76806163-4660-4265-ba1b-ed85f6d8c464" containerName="barbican-keystone-listener-log" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.545368 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd627f49-e48d-4f81-a41c-3c753fdb27b3" containerName="cinder-db-sync" Jan 27 17:18:17 crc 
kubenswrapper[5049]: I0127 17:18:17.545388 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="76806163-4660-4265-ba1b-ed85f6d8c464" containerName="barbican-keystone-listener" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.546296 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.551964 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.552338 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.552386 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-4klb7" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.552348 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.562197 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.614917 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pf7p5"] Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.615144 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" podUID="69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" containerName="dnsmasq-dns" containerID="cri-o://810f682ef552720fc54a96b4be144f36725cb146f2d0e2ce324bc8a2de5a1463" gracePeriod=10 Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.617983 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.683573 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-hjxkn"] Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.685091 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.692465 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-hjxkn"] Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.698955 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-config-data\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.699005 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.699048 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.699071 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-scripts\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.699090 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.699155 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84qhb\" (UniqueName: \"kubernetes.io/projected/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-kube-api-access-84qhb\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.748077 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.754760 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.761562 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.777475 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.791216 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.792097 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.792604 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.795757 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"690eb8dd99a38db0e2d128dc8fae0eb0e7ee256d3467527d01896edbadf9fc55"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.796027 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://690eb8dd99a38db0e2d128dc8fae0eb0e7ee256d3467527d01896edbadf9fc55" gracePeriod=600 Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.807356 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-config\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.807441 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.807497 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.807588 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg2bj\" (UniqueName: 
\"kubernetes.io/projected/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-kube-api-access-hg2bj\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.807622 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-scripts\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.807657 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.807741 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.807814 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.807879 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84qhb\" (UniqueName: \"kubernetes.io/projected/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-kube-api-access-84qhb\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.807974 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.808008 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-config-data\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.808048 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.817456 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-etc-machine-id\") pod \"cinder-scheduler-0\" 
(UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.820229 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-scripts\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.821741 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.832787 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-config-data\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.837368 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.837815 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84qhb\" (UniqueName: \"kubernetes.io/projected/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-kube-api-access-84qhb\") pod \"cinder-scheduler-0\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") " pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.867281 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.911587 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-scripts\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.911650 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.911712 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0a6987-5872-49b3-90a3-2edbc359e4f7-logs\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.911755 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-config\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.911796 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-config-data\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.911811 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.911840 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.911854 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg2bj\" (UniqueName: \"kubernetes.io/projected/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-kube-api-access-hg2bj\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.911874 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb0a6987-5872-49b3-90a3-2edbc359e4f7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.911910 5049 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-config-data-custom\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.911929 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.911957 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q6kf\" (UniqueName: \"kubernetes.io/projected/fb0a6987-5872-49b3-90a3-2edbc359e4f7-kube-api-access-6q6kf\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.911986 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.912836 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.912983 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.913407 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.913571 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-config\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.913940 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:17 crc kubenswrapper[5049]: I0127 17:18:17.928405 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg2bj\" (UniqueName: 
\"kubernetes.io/projected/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-kube-api-access-hg2bj\") pod \"dnsmasq-dns-5c9776ccc5-hjxkn\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.015940 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-config-data-custom\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.016012 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q6kf\" (UniqueName: \"kubernetes.io/projected/fb0a6987-5872-49b3-90a3-2edbc359e4f7-kube-api-access-6q6kf\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.016098 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-scripts\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.016163 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0a6987-5872-49b3-90a3-2edbc359e4f7-logs\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.016234 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-config-data\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.016253 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.016274 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb0a6987-5872-49b3-90a3-2edbc359e4f7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.016385 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb0a6987-5872-49b3-90a3-2edbc359e4f7-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.017509 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0a6987-5872-49b3-90a3-2edbc359e4f7-logs\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.023763 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-config-data\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.029645 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-scripts\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.030145 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.030537 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-config-data-custom\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.041195 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q6kf\" (UniqueName: \"kubernetes.io/projected/fb0a6987-5872-49b3-90a3-2edbc359e4f7-kube-api-access-6q6kf\") pod \"cinder-api-0\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.092714 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.178837 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.252190 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.326817 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-ovsdbserver-nb\") pod \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.327153 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-dns-svc\") pod \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.327184 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-dns-swift-storage-0\") pod \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.327220 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-ovsdbserver-sb\") pod \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.327358 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9p5c\" (UniqueName: \"kubernetes.io/projected/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-kube-api-access-h9p5c\") pod \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.327391 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-config\") pod \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\" (UID: \"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5\") " Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.336960 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-kube-api-access-h9p5c" (OuterVolumeSpecName: "kube-api-access-h9p5c") pod "69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" (UID: "69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5"). InnerVolumeSpecName "kube-api-access-h9p5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.348928 5049 generic.go:334] "Generic (PLEG): container finished" podID="69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" containerID="810f682ef552720fc54a96b4be144f36725cb146f2d0e2ce324bc8a2de5a1463" exitCode=0 Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.348997 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.348987 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" event={"ID":"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5","Type":"ContainerDied","Data":"810f682ef552720fc54a96b4be144f36725cb146f2d0e2ce324bc8a2de5a1463"} Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.349071 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pf7p5" event={"ID":"69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5","Type":"ContainerDied","Data":"d28de34b7af17ea462c9c508e4a53a29eec7cd23446aa565287efbc9d3885e86"} Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.349092 5049 scope.go:117] "RemoveContainer" containerID="810f682ef552720fc54a96b4be144f36725cb146f2d0e2ce324bc8a2de5a1463" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.367924 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="690eb8dd99a38db0e2d128dc8fae0eb0e7ee256d3467527d01896edbadf9fc55" exitCode=0 Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.370289 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"690eb8dd99a38db0e2d128dc8fae0eb0e7ee256d3467527d01896edbadf9fc55"} Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.370378 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"081b74af340b03286f3b46d2254afe32fe4625cc1e5446a6c08c340a2428ad40"} Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.411350 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" (UID: "69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.411464 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-config" (OuterVolumeSpecName: "config") pod "69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" (UID: "69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.431135 5049 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.431155 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9p5c\" (UniqueName: \"kubernetes.io/projected/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-kube-api-access-h9p5c\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.431165 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.442061 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" (UID: "69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.444889 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" (UID: "69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.470684 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" (UID: "69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.480400 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.538111 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.538157 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.538169 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.551775 5049 scope.go:117] "RemoveContainer" containerID="1001b669067845e02e790e3bb2d03d51aec8467676b42281b200fafb35c4838a" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.575126 5049 scope.go:117] "RemoveContainer" containerID="810f682ef552720fc54a96b4be144f36725cb146f2d0e2ce324bc8a2de5a1463" Jan 27 17:18:18 crc kubenswrapper[5049]: E0127 17:18:18.581161 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"810f682ef552720fc54a96b4be144f36725cb146f2d0e2ce324bc8a2de5a1463\": container with ID starting with 810f682ef552720fc54a96b4be144f36725cb146f2d0e2ce324bc8a2de5a1463 not found: ID does not exist" containerID="810f682ef552720fc54a96b4be144f36725cb146f2d0e2ce324bc8a2de5a1463" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.581217 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"810f682ef552720fc54a96b4be144f36725cb146f2d0e2ce324bc8a2de5a1463"} err="failed to get container status \"810f682ef552720fc54a96b4be144f36725cb146f2d0e2ce324bc8a2de5a1463\": rpc error: code = NotFound desc = could not find container \"810f682ef552720fc54a96b4be144f36725cb146f2d0e2ce324bc8a2de5a1463\": container with ID starting with 810f682ef552720fc54a96b4be144f36725cb146f2d0e2ce324bc8a2de5a1463 not found: ID does not exist" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.581253 5049 scope.go:117] "RemoveContainer" containerID="1001b669067845e02e790e3bb2d03d51aec8467676b42281b200fafb35c4838a" Jan 27 17:18:18 crc kubenswrapper[5049]: E0127 17:18:18.582007 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1001b669067845e02e790e3bb2d03d51aec8467676b42281b200fafb35c4838a\": container with ID starting with 1001b669067845e02e790e3bb2d03d51aec8467676b42281b200fafb35c4838a not found: ID does not exist" containerID="1001b669067845e02e790e3bb2d03d51aec8467676b42281b200fafb35c4838a" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.582047 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1001b669067845e02e790e3bb2d03d51aec8467676b42281b200fafb35c4838a"} err="failed to get container status \"1001b669067845e02e790e3bb2d03d51aec8467676b42281b200fafb35c4838a\": rpc error: code = NotFound desc = could not find container \"1001b669067845e02e790e3bb2d03d51aec8467676b42281b200fafb35c4838a\": container with ID starting 
with 1001b669067845e02e790e3bb2d03d51aec8467676b42281b200fafb35c4838a not found: ID does not exist" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.582077 5049 scope.go:117] "RemoveContainer" containerID="6ad01eb278d8a66889a11fa84f093b411a8a38e169a31c62b60f821c2f9f05b1" Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.646235 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-hjxkn"] Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.699707 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pf7p5"] Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.717141 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pf7p5"] Jan 27 17:18:18 crc kubenswrapper[5049]: I0127 17:18:18.804835 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 17:18:18 crc kubenswrapper[5049]: W0127 17:18:18.814061 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb0a6987_5872_49b3_90a3_2edbc359e4f7.slice/crio-1bcaebefd5c47885c37fd7424d20936a58310149cd65bb307e59513ceacfd572 WatchSource:0}: Error finding container 1bcaebefd5c47885c37fd7424d20936a58310149cd65bb307e59513ceacfd572: Status 404 returned error can't find the container with id 1bcaebefd5c47885c37fd7424d20936a58310149cd65bb307e59513ceacfd572 Jan 27 17:18:19 crc kubenswrapper[5049]: I0127 17:18:19.388307 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6","Type":"ContainerStarted","Data":"fe09db5dec24738b86c337aa662fce218f396e0b2f9516e82037b178a1353f12"} Jan 27 17:18:19 crc kubenswrapper[5049]: I0127 17:18:19.405938 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb0a6987-5872-49b3-90a3-2edbc359e4f7","Type":"ContainerStarted","Data":"1bcaebefd5c47885c37fd7424d20936a58310149cd65bb307e59513ceacfd572"} Jan 27 17:18:19 crc kubenswrapper[5049]: I0127 17:18:19.408044 5049 generic.go:334] "Generic (PLEG): container finished" podID="c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" containerID="db361407212f361d4bd640c5d3094d114a1f36086e617ff02576af1e7a67a5a9" exitCode=0 Jan 27 17:18:19 crc kubenswrapper[5049]: I0127 17:18:19.408155 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" event={"ID":"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325","Type":"ContainerDied","Data":"db361407212f361d4bd640c5d3094d114a1f36086e617ff02576af1e7a67a5a9"} Jan 27 17:18:19 crc kubenswrapper[5049]: I0127 17:18:19.409253 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" event={"ID":"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325","Type":"ContainerStarted","Data":"4fd885b5b468e56cad03667ddc6a3b7c5a42c486208349e9ac5f67a7aa80113e"} Jan 27 17:18:19 crc kubenswrapper[5049]: I0127 17:18:19.659075 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" path="/var/lib/kubelet/pods/69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5/volumes" Jan 27 17:18:19 crc kubenswrapper[5049]: I0127 17:18:19.867248 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-66dcf5f4cb-7jts2" podUID="53e4e251-30ea-4628-9490-d88425a297ce" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 
10.217.0.2:51368->10.217.0.156:9311: read: connection reset by peer" Jan 27 17:18:19 crc kubenswrapper[5049]: I0127 17:18:19.868180 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-66dcf5f4cb-7jts2" podUID="53e4e251-30ea-4628-9490-d88425a297ce" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:51366->10.217.0.156:9311: read: connection reset by peer" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.330084 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.428654 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.453745 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb0a6987-5872-49b3-90a3-2edbc359e4f7","Type":"ContainerStarted","Data":"bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7"} Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.458957 5049 generic.go:334] "Generic (PLEG): container finished" podID="53e4e251-30ea-4628-9490-d88425a297ce" containerID="1a6e777297c190452814bc9c3ab5451de8be3afe6a0f7e2abb6ad261030154af" exitCode=0 Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.459053 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66dcf5f4cb-7jts2" event={"ID":"53e4e251-30ea-4628-9490-d88425a297ce","Type":"ContainerDied","Data":"1a6e777297c190452814bc9c3ab5451de8be3afe6a0f7e2abb6ad261030154af"} Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.459067 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-66dcf5f4cb-7jts2" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.459085 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66dcf5f4cb-7jts2" event={"ID":"53e4e251-30ea-4628-9490-d88425a297ce","Type":"ContainerDied","Data":"12493c0000b71cd932e93ed82cd14bb85ddf2a5900d2f5c84ff3365f2319945a"} Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.459106 5049 scope.go:117] "RemoveContainer" containerID="1a6e777297c190452814bc9c3ab5451de8be3afe6a0f7e2abb6ad261030154af" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.465029 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" event={"ID":"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325","Type":"ContainerStarted","Data":"1bfb28b1f8e7b3385be794ecd9908288adbec16d1d339ee1e288f822b69582a1"} Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.467287 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.469533 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6","Type":"ContainerStarted","Data":"37eef8412dee9d499990a8e1dfa7480c3c144ef89bfbebfc6458101ac22e7caf"} Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.490783 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-config-data\") pod \"53e4e251-30ea-4628-9490-d88425a297ce\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.490925 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e4e251-30ea-4628-9490-d88425a297ce-logs\") pod \"53e4e251-30ea-4628-9490-d88425a297ce\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.490995 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnc29\" (UniqueName: \"kubernetes.io/projected/53e4e251-30ea-4628-9490-d88425a297ce-kube-api-access-cnc29\") pod \"53e4e251-30ea-4628-9490-d88425a297ce\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.491047 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-config-data-custom\") pod \"53e4e251-30ea-4628-9490-d88425a297ce\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.492701 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-combined-ca-bundle\") pod \"53e4e251-30ea-4628-9490-d88425a297ce\" (UID: \"53e4e251-30ea-4628-9490-d88425a297ce\") " Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.494030 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e4e251-30ea-4628-9490-d88425a297ce-logs" (OuterVolumeSpecName: "logs") pod "53e4e251-30ea-4628-9490-d88425a297ce" (UID: "53e4e251-30ea-4628-9490-d88425a297ce"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.499036 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e4e251-30ea-4628-9490-d88425a297ce-kube-api-access-cnc29" (OuterVolumeSpecName: "kube-api-access-cnc29") pod "53e4e251-30ea-4628-9490-d88425a297ce" (UID: "53e4e251-30ea-4628-9490-d88425a297ce"). InnerVolumeSpecName "kube-api-access-cnc29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.506947 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "53e4e251-30ea-4628-9490-d88425a297ce" (UID: "53e4e251-30ea-4628-9490-d88425a297ce"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.511949 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" podStartSLOduration=3.511750261 podStartE2EDuration="3.511750261s" podCreationTimestamp="2026-01-27 17:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:18:20.488895017 +0000 UTC m=+1275.587868566" watchObservedRunningTime="2026-01-27 17:18:20.511750261 +0000 UTC m=+1275.610723810" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.535285 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53e4e251-30ea-4628-9490-d88425a297ce" (UID: "53e4e251-30ea-4628-9490-d88425a297ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.541064 5049 scope.go:117] "RemoveContainer" containerID="4495a59b101f2f8e71cb450e895a12da1a97156a119aeceb32ed7d78686b04a8" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.575214 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-config-data" (OuterVolumeSpecName: "config-data") pod "53e4e251-30ea-4628-9490-d88425a297ce" (UID: "53e4e251-30ea-4628-9490-d88425a297ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.610504 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.610537 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.610548 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e4e251-30ea-4628-9490-d88425a297ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.610559 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e4e251-30ea-4628-9490-d88425a297ce-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.610569 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnc29\" (UniqueName: \"kubernetes.io/projected/53e4e251-30ea-4628-9490-d88425a297ce-kube-api-access-cnc29\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.646543 5049 scope.go:117] "RemoveContainer" containerID="1a6e777297c190452814bc9c3ab5451de8be3afe6a0f7e2abb6ad261030154af" Jan 27 17:18:20 crc kubenswrapper[5049]: E0127 17:18:20.647331 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a6e777297c190452814bc9c3ab5451de8be3afe6a0f7e2abb6ad261030154af\": container with ID starting with 1a6e777297c190452814bc9c3ab5451de8be3afe6a0f7e2abb6ad261030154af not found: ID does not exist" containerID="1a6e777297c190452814bc9c3ab5451de8be3afe6a0f7e2abb6ad261030154af" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.647384 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a6e777297c190452814bc9c3ab5451de8be3afe6a0f7e2abb6ad261030154af"} err="failed to get container status \"1a6e777297c190452814bc9c3ab5451de8be3afe6a0f7e2abb6ad261030154af\": rpc error: code = NotFound desc = could not find container \"1a6e777297c190452814bc9c3ab5451de8be3afe6a0f7e2abb6ad261030154af\": container with ID starting with 1a6e777297c190452814bc9c3ab5451de8be3afe6a0f7e2abb6ad261030154af not found: ID does not exist" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.647412 5049 scope.go:117] "RemoveContainer" containerID="4495a59b101f2f8e71cb450e895a12da1a97156a119aeceb32ed7d78686b04a8" Jan 27 17:18:20 crc kubenswrapper[5049]: E0127 17:18:20.648437 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4495a59b101f2f8e71cb450e895a12da1a97156a119aeceb32ed7d78686b04a8\": container with ID starting with 4495a59b101f2f8e71cb450e895a12da1a97156a119aeceb32ed7d78686b04a8 not found: ID does not exist" containerID="4495a59b101f2f8e71cb450e895a12da1a97156a119aeceb32ed7d78686b04a8" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.648477 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4495a59b101f2f8e71cb450e895a12da1a97156a119aeceb32ed7d78686b04a8"} err="failed to get container status 
\"4495a59b101f2f8e71cb450e895a12da1a97156a119aeceb32ed7d78686b04a8\": rpc error: code = NotFound desc = could not find container \"4495a59b101f2f8e71cb450e895a12da1a97156a119aeceb32ed7d78686b04a8\": container with ID starting with 4495a59b101f2f8e71cb450e895a12da1a97156a119aeceb32ed7d78686b04a8 not found: ID does not exist" Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.800963 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-66dcf5f4cb-7jts2"] Jan 27 17:18:20 crc kubenswrapper[5049]: I0127 17:18:20.812029 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-66dcf5f4cb-7jts2"] Jan 27 17:18:21 crc kubenswrapper[5049]: I0127 17:18:21.478890 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6","Type":"ContainerStarted","Data":"da98de6f9211c5ba3a762c186677de3de43cc28989a3f2abab6d7470e5f0c4fb"} Jan 27 17:18:21 crc kubenswrapper[5049]: I0127 17:18:21.481613 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb0a6987-5872-49b3-90a3-2edbc359e4f7","Type":"ContainerStarted","Data":"c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf"} Jan 27 17:18:21 crc kubenswrapper[5049]: I0127 17:18:21.481743 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fb0a6987-5872-49b3-90a3-2edbc359e4f7" containerName="cinder-api-log" containerID="cri-o://bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7" gracePeriod=30 Jan 27 17:18:21 crc kubenswrapper[5049]: I0127 17:18:21.481937 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 17:18:21 crc kubenswrapper[5049]: I0127 17:18:21.481972 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fb0a6987-5872-49b3-90a3-2edbc359e4f7" containerName="cinder-api" containerID="cri-o://c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf" gracePeriod=30 Jan 27 17:18:21 crc kubenswrapper[5049]: I0127 17:18:21.503256 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.587675242 podStartE2EDuration="4.503234174s" podCreationTimestamp="2026-01-27 17:18:17 +0000 UTC" firstStartedPulling="2026-01-27 17:18:18.432323407 +0000 UTC m=+1273.531296956" lastFinishedPulling="2026-01-27 17:18:19.347882349 +0000 UTC m=+1274.446855888" observedRunningTime="2026-01-27 17:18:21.498001834 +0000 UTC m=+1276.596975393" watchObservedRunningTime="2026-01-27 17:18:21.503234174 +0000 UTC m=+1276.602207733" Jan 27 17:18:21 crc kubenswrapper[5049]: I0127 17:18:21.530832 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.530815033 podStartE2EDuration="4.530815033s" podCreationTimestamp="2026-01-27 17:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:18:21.525907523 +0000 UTC m=+1276.624881072" watchObservedRunningTime="2026-01-27 17:18:21.530815033 +0000 UTC m=+1276.629788582" Jan 27 17:18:21 crc kubenswrapper[5049]: I0127 17:18:21.655959 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e4e251-30ea-4628-9490-d88425a297ce" path="/var/lib/kubelet/pods/53e4e251-30ea-4628-9490-d88425a297ce/volumes" Jan 27 17:18:22 crc 
kubenswrapper[5049]: I0127 17:18:22.100511 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.133959 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-scripts\") pod \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.134035 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb0a6987-5872-49b3-90a3-2edbc359e4f7-etc-machine-id\") pod \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.134127 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb0a6987-5872-49b3-90a3-2edbc359e4f7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fb0a6987-5872-49b3-90a3-2edbc359e4f7" (UID: "fb0a6987-5872-49b3-90a3-2edbc359e4f7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.134209 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0a6987-5872-49b3-90a3-2edbc359e4f7-logs\") pod \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.134243 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-config-data-custom\") pod \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.134792 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb0a6987-5872-49b3-90a3-2edbc359e4f7-logs" (OuterVolumeSpecName: "logs") pod "fb0a6987-5872-49b3-90a3-2edbc359e4f7" (UID: "fb0a6987-5872-49b3-90a3-2edbc359e4f7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.134909 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-config-data\") pod \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.134944 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-combined-ca-bundle\") pod \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.135304 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q6kf\" (UniqueName: \"kubernetes.io/projected/fb0a6987-5872-49b3-90a3-2edbc359e4f7-kube-api-access-6q6kf\") pod \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\" (UID: \"fb0a6987-5872-49b3-90a3-2edbc359e4f7\") " Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.136090 5049 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fb0a6987-5872-49b3-90a3-2edbc359e4f7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.136111 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb0a6987-5872-49b3-90a3-2edbc359e4f7-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.143871 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb0a6987-5872-49b3-90a3-2edbc359e4f7-kube-api-access-6q6kf" (OuterVolumeSpecName: "kube-api-access-6q6kf") pod "fb0a6987-5872-49b3-90a3-2edbc359e4f7" (UID: "fb0a6987-5872-49b3-90a3-2edbc359e4f7"). InnerVolumeSpecName "kube-api-access-6q6kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.154819 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fb0a6987-5872-49b3-90a3-2edbc359e4f7" (UID: "fb0a6987-5872-49b3-90a3-2edbc359e4f7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.159446 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-scripts" (OuterVolumeSpecName: "scripts") pod "fb0a6987-5872-49b3-90a3-2edbc359e4f7" (UID: "fb0a6987-5872-49b3-90a3-2edbc359e4f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.182017 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb0a6987-5872-49b3-90a3-2edbc359e4f7" (UID: "fb0a6987-5872-49b3-90a3-2edbc359e4f7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.217969 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-config-data" (OuterVolumeSpecName: "config-data") pod "fb0a6987-5872-49b3-90a3-2edbc359e4f7" (UID: "fb0a6987-5872-49b3-90a3-2edbc359e4f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.238910 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.238961 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.238981 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.238997 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb0a6987-5872-49b3-90a3-2edbc359e4f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.239015 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q6kf\" (UniqueName: \"kubernetes.io/projected/fb0a6987-5872-49b3-90a3-2edbc359e4f7-kube-api-access-6q6kf\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.493936 5049 generic.go:334] "Generic (PLEG): container finished" podID="fb0a6987-5872-49b3-90a3-2edbc359e4f7" containerID="c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf" exitCode=0 Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.493967 5049 generic.go:334] "Generic (PLEG): container finished" podID="fb0a6987-5872-49b3-90a3-2edbc359e4f7" containerID="bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7" exitCode=143 Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.494769 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.496971 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb0a6987-5872-49b3-90a3-2edbc359e4f7","Type":"ContainerDied","Data":"c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf"} Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.497026 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb0a6987-5872-49b3-90a3-2edbc359e4f7","Type":"ContainerDied","Data":"bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7"} Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.497043 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fb0a6987-5872-49b3-90a3-2edbc359e4f7","Type":"ContainerDied","Data":"1bcaebefd5c47885c37fd7424d20936a58310149cd65bb307e59513ceacfd572"} Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.497063 5049 scope.go:117] "RemoveContainer" containerID="c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.523182 5049 scope.go:117] "RemoveContainer" containerID="bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.542978 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.552952 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.566924 5049 scope.go:117] "RemoveContainer" containerID="c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf" Jan 27 17:18:22 crc kubenswrapper[5049]: E0127 17:18:22.567347 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf\": container with ID starting with c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf not found: ID does not exist" containerID="c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.567382 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf"} err="failed to get container status \"c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf\": rpc error: code = NotFound desc = could not find container \"c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf\": container with ID starting with c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf not found: ID does not exist" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.567406 5049 scope.go:117] "RemoveContainer" containerID="bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7" Jan 27 17:18:22 crc kubenswrapper[5049]: E0127 17:18:22.567772 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7\": container with ID starting with bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7 not found: ID does not exist" containerID="bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.567810 5049 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7"} err="failed to get container status \"bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7\": rpc error: code = NotFound desc = could not find container \"bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7\": container with ID starting with bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7 not found: ID does not exist" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.567828 5049 scope.go:117] "RemoveContainer" containerID="c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.568590 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf"} err="failed to get container status \"c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf\": rpc error: code = NotFound desc = could not find container \"c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf\": container with ID starting with c2c66a7b03b2448dd1afc326caf19f36a9f4a7f729f4bb939d1a6ee2ffd12ecf not found: ID does not exist" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.568622 5049 scope.go:117] "RemoveContainer" containerID="bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.568913 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7"} err="failed to get container status \"bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7\": rpc error: code = NotFound desc = could not find container \"bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7\": container with ID starting with bc1d34eb9d7e76e97a507831924f1ea671e71993be28103b7ce4e479bbffcaf7 not found: ID does not exist" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.579815 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 17:18:22 crc kubenswrapper[5049]: E0127 17:18:22.580232 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e4e251-30ea-4628-9490-d88425a297ce" containerName="barbican-api" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.580252 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e4e251-30ea-4628-9490-d88425a297ce" containerName="barbican-api" Jan 27 17:18:22 crc kubenswrapper[5049]: E0127 17:18:22.580266 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" containerName="init" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.580273 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" containerName="init" Jan 27 17:18:22 crc kubenswrapper[5049]: E0127 17:18:22.580286 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0a6987-5872-49b3-90a3-2edbc359e4f7" containerName="cinder-api" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.580292 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0a6987-5872-49b3-90a3-2edbc359e4f7" containerName="cinder-api" Jan 27 17:18:22 crc kubenswrapper[5049]: E0127 17:18:22.580312 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb0a6987-5872-49b3-90a3-2edbc359e4f7" 
containerName="cinder-api-log" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.580318 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb0a6987-5872-49b3-90a3-2edbc359e4f7" containerName="cinder-api-log" Jan 27 17:18:22 crc kubenswrapper[5049]: E0127 17:18:22.580326 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" containerName="dnsmasq-dns" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.580332 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" containerName="dnsmasq-dns" Jan 27 17:18:22 crc kubenswrapper[5049]: E0127 17:18:22.580345 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e4e251-30ea-4628-9490-d88425a297ce" containerName="barbican-api-log" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.580351 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e4e251-30ea-4628-9490-d88425a297ce" containerName="barbican-api-log" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.580504 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="69bdda1c-a0d2-4127-8d2d-9e1c1887e2a5" containerName="dnsmasq-dns" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.580525 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e4e251-30ea-4628-9490-d88425a297ce" containerName="barbican-api-log" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.580534 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb0a6987-5872-49b3-90a3-2edbc359e4f7" containerName="cinder-api-log" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.580553 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb0a6987-5872-49b3-90a3-2edbc359e4f7" containerName="cinder-api" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.580560 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e4e251-30ea-4628-9490-d88425a297ce" containerName="barbican-api" Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.581508 5049 util.go:30] "No sandbox for pod can be found. 
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.581508 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.583409 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.585346 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.586969 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.587392 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.645236 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-logs\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.645400 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-config-data-custom\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.645426 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-public-tls-certs\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.645448 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.645480 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-scripts\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.645497 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-etc-machine-id\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.645569 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjf9d\" (UniqueName: \"kubernetes.io/projected/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-kube-api-access-zjf9d\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.645641 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-config-data\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.645691 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.747009 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-config-data-custom\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.747119 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-public-tls-certs\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.747144 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.747207 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-scripts\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.747239 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-etc-machine-id\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.747274 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjf9d\" (UniqueName: \"kubernetes.io/projected/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-kube-api-access-zjf9d\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.747361 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-config-data\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.747383 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.747408 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-logs\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.748033 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-logs\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.748840 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-etc-machine-id\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.751420 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-public-tls-certs\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.752981 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.753038 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-scripts\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.754303 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-config-data-custom\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.763656 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.765393 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-config-data\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.769928 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjf9d\" (UniqueName: \"kubernetes.io/projected/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-kube-api-access-zjf9d\") pod \"cinder-api-0\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " pod="openstack/cinder-api-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.868133 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 27 17:18:22 crc kubenswrapper[5049]: I0127 17:18:22.941475 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 27 17:18:23 crc kubenswrapper[5049]: I0127 17:18:23.418945 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 17:18:23 crc kubenswrapper[5049]: W0127 17:18:23.422424 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod492cb82e_33fb_4fc7_85e2_7d4285e5ff00.slice/crio-290e2863175e2ea7918f790a1950d0f1dc7fa78c1e79d5e8a5400747209cb336 WatchSource:0}: Error finding container 290e2863175e2ea7918f790a1950d0f1dc7fa78c1e79d5e8a5400747209cb336: Status 404 returned error can't find the container with id 290e2863175e2ea7918f790a1950d0f1dc7fa78c1e79d5e8a5400747209cb336
Jan 27 17:18:23 crc kubenswrapper[5049]: I0127 17:18:23.507710 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492cb82e-33fb-4fc7-85e2-7d4285e5ff00","Type":"ContainerStarted","Data":"290e2863175e2ea7918f790a1950d0f1dc7fa78c1e79d5e8a5400747209cb336"}
Jan 27 17:18:23 crc kubenswrapper[5049]: I0127 17:18:23.660957 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb0a6987-5872-49b3-90a3-2edbc359e4f7" path="/var/lib/kubelet/pods/fb0a6987-5872-49b3-90a3-2edbc359e4f7/volumes"
Jan 27 17:18:24 crc kubenswrapper[5049]: I0127 17:18:24.526768 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492cb82e-33fb-4fc7-85e2-7d4285e5ff00","Type":"ContainerStarted","Data":"3113dcce28048dab388fa9369937d0bd0a1fc6c1ae5f9d46acfb897247e15c0d"}
Jan 27 17:18:25 crc kubenswrapper[5049]: I0127 17:18:25.536690 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492cb82e-33fb-4fc7-85e2-7d4285e5ff00","Type":"ContainerStarted","Data":"166079e95268cb7e35e7bb3173c1768e058053e781153b7e92d90749146e26bf"}
Jan 27 17:18:25 crc kubenswrapper[5049]: I0127 17:18:25.537335 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 27 17:18:25 crc kubenswrapper[5049]: I0127 17:18:25.567296 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.567272559 podStartE2EDuration="3.567272559s" podCreationTimestamp="2026-01-27 17:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:18:25.554802682 +0000 UTC m=+1280.653776241" watchObservedRunningTime="2026-01-27 17:18:25.567272559 +0000 UTC m=+1280.666246108"
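The startup-latency entry is plain timestamp arithmetic: podStartSLOduration=3.567272559 is exactly watchObservedRunningTime (17:18:25.567272559) minus podCreationTimestamp (17:18:22), and it equals the end-to-end duration because no image pull happened (both pull timestamps are the zero value). A worked check in Go:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Timestamps copied from the log entry for openstack/cinder-api-0.
		created, _ := time.Parse(time.RFC3339, "2026-01-27T17:18:22Z")
		observed, _ := time.Parse(time.RFC3339Nano, "2026-01-27T17:18:25.567272559Z")

		// No image pull happened, so the SLO duration is the whole
		// creation-to-running interval.
		fmt.Println(observed.Sub(created)) // prints 3.567272559s
	}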
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.094802 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn"
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.173948 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-shbs4"]
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.174219 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" podUID="3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" containerName="dnsmasq-dns" containerID="cri-o://f74f8d86b5d917c3b3e9b8c0946231f3afed8fd1c67e75447ee4e5e4ddfec1fa" gracePeriod=10
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.181972 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.253823 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.578392 5049 generic.go:334] "Generic (PLEG): container finished" podID="3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" containerID="f74f8d86b5d917c3b3e9b8c0946231f3afed8fd1c67e75447ee4e5e4ddfec1fa" exitCode=0
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.578488 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" event={"ID":"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e","Type":"ContainerDied","Data":"f74f8d86b5d917c3b3e9b8c0946231f3afed8fd1c67e75447ee4e5e4ddfec1fa"}
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.578698 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" containerName="cinder-scheduler" containerID="cri-o://37eef8412dee9d499990a8e1dfa7480c3c144ef89bfbebfc6458101ac22e7caf" gracePeriod=30
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.578845 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" containerName="probe" containerID="cri-o://da98de6f9211c5ba3a762c186677de3de43cc28989a3f2abab6d7470e5f0c4fb" gracePeriod=30
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.737297 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4"
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.780794 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-config\") pod \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") "
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.780849 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-ovsdbserver-sb\") pod \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") "
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.780981 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-ovsdbserver-nb\") pod \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") "
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.781118 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-dns-svc\") pod \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") "
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.781165 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxsgb\" (UniqueName: \"kubernetes.io/projected/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-kube-api-access-qxsgb\") pod \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") "
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.781216 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-dns-swift-storage-0\") pod \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\" (UID: \"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e\") "
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.796860 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-kube-api-access-qxsgb" (OuterVolumeSpecName: "kube-api-access-qxsgb") pod "3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" (UID: "3c4a9cea-a9c1-42e9-91a6-50302c39ac9e"). InnerVolumeSpecName "kube-api-access-qxsgb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.852700 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" (UID: "3c4a9cea-a9c1-42e9-91a6-50302c39ac9e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.868392 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" (UID: "3c4a9cea-a9c1-42e9-91a6-50302c39ac9e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.870524 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" (UID: "3c4a9cea-a9c1-42e9-91a6-50302c39ac9e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.881491 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-config" (OuterVolumeSpecName: "config") pod "3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" (UID: "3c4a9cea-a9c1-42e9-91a6-50302c39ac9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.883154 5049 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.883190 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-config\") on node \"crc\" DevicePath \"\""
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.883203 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.883217 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.883230 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxsgb\" (UniqueName: \"kubernetes.io/projected/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-kube-api-access-qxsgb\") on node \"crc\" DevicePath \"\""
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.925229 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" (UID: "3c4a9cea-a9c1-42e9-91a6-50302c39ac9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.971247 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-679b885964-9p8nj"
Jan 27 17:18:28 crc kubenswrapper[5049]: I0127 17:18:28.985816 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 17:18:29 crc kubenswrapper[5049]: I0127 17:18:29.396398 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c4bd57ddb-fz2dp"
Jan 27 17:18:29 crc kubenswrapper[5049]: I0127 17:18:29.400093 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c4bd57ddb-fz2dp"
Jan 27 17:18:29 crc kubenswrapper[5049]: I0127 17:18:29.591801 5049 generic.go:334] "Generic (PLEG): container finished" podID="5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" containerID="da98de6f9211c5ba3a762c186677de3de43cc28989a3f2abab6d7470e5f0c4fb" exitCode=0
Jan 27 17:18:29 crc kubenswrapper[5049]: I0127 17:18:29.591864 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6","Type":"ContainerDied","Data":"da98de6f9211c5ba3a762c186677de3de43cc28989a3f2abab6d7470e5f0c4fb"}
Jan 27 17:18:29 crc kubenswrapper[5049]: I0127 17:18:29.596414 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4"
Jan 27 17:18:29 crc kubenswrapper[5049]: I0127 17:18:29.598732 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-shbs4" event={"ID":"3c4a9cea-a9c1-42e9-91a6-50302c39ac9e","Type":"ContainerDied","Data":"8afdf0ab67810d5a6ff5be6717715290963495436b86d82176e0bd01f50e005a"}
Jan 27 17:18:29 crc kubenswrapper[5049]: I0127 17:18:29.598780 5049 scope.go:117] "RemoveContainer" containerID="f74f8d86b5d917c3b3e9b8c0946231f3afed8fd1c67e75447ee4e5e4ddfec1fa"
Jan 27 17:18:29 crc kubenswrapper[5049]: I0127 17:18:29.631698 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-shbs4"]
Jan 27 17:18:29 crc kubenswrapper[5049]: I0127 17:18:29.633272 5049 scope.go:117] "RemoveContainer" containerID="c365c84b097b4fd100afd8736b2bccf2fd5ac7778d42563e702e2382ffd563b9"
Jan 27 17:18:29 crc kubenswrapper[5049]: I0127 17:18:29.639010 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-shbs4"]
Jan 27 17:18:29 crc kubenswrapper[5049]: I0127 17:18:29.679027 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" path="/var/lib/kubelet/pods/3c4a9cea-a9c1-42e9-91a6-50302c39ac9e/volumes"
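The dnsmasq-dns-785d8bcb8c-shbs4 teardown above shows the ordering: the container is killed with gracePeriod=10, exits 0 within the grace window (ContainerDied), and only then are its volumes unmounted, reported detached, and the orphaned volumes directory removed. A sketch of a grace-period kill under an assumed interface, not the real kubelet signatures:

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// stopGracefully asks the container to exit, waits up to gracePeriod,
	// then forces the kill, mirroring "Killing container with a grace period".
	func stopGracefully(ctx context.Context, id string, gracePeriod time.Duration,
		term func(string), kill func(string), exited <-chan int) {

		term(id) // polite request first (SIGTERM in a real runtime)
		select {
		case code := <-exited:
			fmt.Printf("ContainerDied %s exitCode=%d\n", id, code)
		case <-time.After(gracePeriod):
			kill(id) // grace period expired, force it (SIGKILL)
		case <-ctx.Done():
		}
	}

	func main() {
		exited := make(chan int, 1)
		term := func(id string) { exited <- 0 } // dnsmasq exits promptly here
		kill := func(id string) { fmt.Println("force-killed", id) }
		stopGracefully(context.Background(), "f74f8d86b5d9", 10*time.Second, term, kill, exited)
	}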
Jan 27 17:18:30 crc kubenswrapper[5049]: I0127 17:18:30.952969 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 27 17:18:30 crc kubenswrapper[5049]: E0127 17:18:30.953756 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" containerName="init"
Jan 27 17:18:30 crc kubenswrapper[5049]: I0127 17:18:30.953774 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" containerName="init"
Jan 27 17:18:30 crc kubenswrapper[5049]: E0127 17:18:30.953802 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" containerName="dnsmasq-dns"
Jan 27 17:18:30 crc kubenswrapper[5049]: I0127 17:18:30.953809 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" containerName="dnsmasq-dns"
Jan 27 17:18:30 crc kubenswrapper[5049]: I0127 17:18:30.954038 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c4a9cea-a9c1-42e9-91a6-50302c39ac9e" containerName="dnsmasq-dns"
Jan 27 17:18:30 crc kubenswrapper[5049]: I0127 17:18:30.955885 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 27 17:18:30 crc kubenswrapper[5049]: I0127 17:18:30.958614 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 27 17:18:30 crc kubenswrapper[5049]: I0127 17:18:30.958667 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 27 17:18:30 crc kubenswrapper[5049]: I0127 17:18:30.959212 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-nr7fd"
Jan 27 17:18:30 crc kubenswrapper[5049]: I0127 17:18:30.969014 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.020313 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/467be3c3-34b2-4cea-8785-bacf5a6a5a39-openstack-config-secret\") pod \"openstackclient\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " pod="openstack/openstackclient"
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.020378 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npbt9\" (UniqueName: \"kubernetes.io/projected/467be3c3-34b2-4cea-8785-bacf5a6a5a39-kube-api-access-npbt9\") pod \"openstackclient\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " pod="openstack/openstackclient"
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.020452 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/467be3c3-34b2-4cea-8785-bacf5a6a5a39-openstack-config\") pod \"openstackclient\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " pod="openstack/openstackclient"
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.020586 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467be3c3-34b2-4cea-8785-bacf5a6a5a39-combined-ca-bundle\") pod \"openstackclient\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " pod="openstack/openstackclient"
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.122722 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/467be3c3-34b2-4cea-8785-bacf5a6a5a39-openstack-config-secret\") pod \"openstackclient\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " pod="openstack/openstackclient"
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.122785 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npbt9\" (UniqueName: \"kubernetes.io/projected/467be3c3-34b2-4cea-8785-bacf5a6a5a39-kube-api-access-npbt9\") pod \"openstackclient\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " pod="openstack/openstackclient"
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.122833 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/467be3c3-34b2-4cea-8785-bacf5a6a5a39-openstack-config\") pod \"openstackclient\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " pod="openstack/openstackclient"
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.122918 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467be3c3-34b2-4cea-8785-bacf5a6a5a39-combined-ca-bundle\") pod \"openstackclient\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " pod="openstack/openstackclient"
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.124044 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/467be3c3-34b2-4cea-8785-bacf5a6a5a39-openstack-config\") pod \"openstackclient\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " pod="openstack/openstackclient"
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.128491 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/467be3c3-34b2-4cea-8785-bacf5a6a5a39-openstack-config-secret\") pod \"openstackclient\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " pod="openstack/openstackclient"
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.128611 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467be3c3-34b2-4cea-8785-bacf5a6a5a39-combined-ca-bundle\") pod \"openstackclient\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " pod="openstack/openstackclient"
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.138168 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npbt9\" (UniqueName: \"kubernetes.io/projected/467be3c3-34b2-4cea-8785-bacf5a6a5a39-kube-api-access-npbt9\") pod \"openstackclient\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " pod="openstack/openstackclient"
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.285412 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.622359 5049 generic.go:334] "Generic (PLEG): container finished" podID="5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" containerID="37eef8412dee9d499990a8e1dfa7480c3c144ef89bfbebfc6458101ac22e7caf" exitCode=0
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.622885 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6","Type":"ContainerDied","Data":"37eef8412dee9d499990a8e1dfa7480c3c144ef89bfbebfc6458101ac22e7caf"}
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.701638 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.744432 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-config-data\") pod \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") "
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.744635 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-etc-machine-id\") pod \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") "
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.744666 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-combined-ca-bundle\") pod \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") "
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.744707 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84qhb\" (UniqueName: \"kubernetes.io/projected/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-kube-api-access-84qhb\") pod \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") "
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.744757 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-config-data-custom\") pod \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") "
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.744818 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-scripts\") pod \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\" (UID: \"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6\") "
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.744764 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" (UID: "5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.745361 5049 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.748918 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" (UID: "5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.750267 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-kube-api-access-84qhb" (OuterVolumeSpecName: "kube-api-access-84qhb") pod "5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" (UID: "5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6"). InnerVolumeSpecName "kube-api-access-84qhb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.751115 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-scripts" (OuterVolumeSpecName: "scripts") pod "5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" (UID: "5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.797160 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" (UID: "5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.837206 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-config-data" (OuterVolumeSpecName: "config-data") pod "5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" (UID: "5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.849826 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.849859 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.849868 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.849879 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84qhb\" (UniqueName: \"kubernetes.io/projected/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-kube-api-access-84qhb\") on node \"crc\" DevicePath \"\""
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.849887 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 17:18:31 crc kubenswrapper[5049]: I0127 17:18:31.892107 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 27 17:18:31 crc kubenswrapper[5049]: W0127 17:18:31.895169 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod467be3c3_34b2_4cea_8785_bacf5a6a5a39.slice/crio-50fdaba242febf0c89a5e5b1f1f9299f79e2c92946600c80e613cb5faad63c00 WatchSource:0}: Error finding container 50fdaba242febf0c89a5e5b1f1f9299f79e2c92946600c80e613cb5faad63c00: Status 404 returned error can't find the container with id 50fdaba242febf0c89a5e5b1f1f9299f79e2c92946600c80e613cb5faad63c00
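Both "Failed to process watch event ... Status 404" warnings (crio-290e28... earlier, crio-50fdab... here) are the same harmless race: the cgroup directory appears before the container is queryable in the runtime, the lookup 404s, and a later event or relist picks the container up. A sketch of tolerating that race, with a hypothetical lookup function:

	package main

	import (
		"errors"
		"fmt"
	)

	var errStatus404 = errors.New("status 404: can't find the container")

	// onCgroupCreated handles a cgroup watch event. A 404 from the runtime
	// just means the container isn't registered yet; dropping the event is
	// safe because a later event (or relist) will observe the container.
	func onCgroupCreated(id string, lookup func(string) error) {
		if err := lookup(id); err != nil {
			if errors.Is(err, errStatus404) {
				fmt.Printf("Failed to process watch event for %s: %v (benign, retried later)\n", id, err)
				return
			}
			fmt.Println("unexpected error:", err)
		}
	}

	func main() {
		lookup := func(id string) error { return errStatus404 } // not registered yet
		onCgroupCreated("50fdaba242fe", lookup)
	}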
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.632844 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"467be3c3-34b2-4cea-8785-bacf5a6a5a39","Type":"ContainerStarted","Data":"50fdaba242febf0c89a5e5b1f1f9299f79e2c92946600c80e613cb5faad63c00"}
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.634923 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6","Type":"ContainerDied","Data":"fe09db5dec24738b86c337aa662fce218f396e0b2f9516e82037b178a1353f12"}
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.634962 5049 scope.go:117] "RemoveContainer" containerID="da98de6f9211c5ba3a762c186677de3de43cc28989a3f2abab6d7470e5f0c4fb"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.635089 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.678401 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.682143 5049 scope.go:117] "RemoveContainer" containerID="37eef8412dee9d499990a8e1dfa7480c3c144ef89bfbebfc6458101ac22e7caf"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.692519 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.705856 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 17:18:32 crc kubenswrapper[5049]: E0127 17:18:32.706219 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" containerName="cinder-scheduler"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.706234 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" containerName="cinder-scheduler"
Jan 27 17:18:32 crc kubenswrapper[5049]: E0127 17:18:32.706256 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" containerName="probe"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.706262 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" containerName="probe"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.706416 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" containerName="probe"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.706440 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" containerName="cinder-scheduler"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.707410 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.709619 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.718002 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.765075 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.765174 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.765216 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.765250 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzptm\" (UniqueName: \"kubernetes.io/projected/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-kube-api-access-lzptm\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.765283 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.765349 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.866651 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.866717 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.866743 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.866767 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzptm\" (UniqueName: \"kubernetes.io/projected/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-kube-api-access-lzptm\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.866785 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.866818 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.867644 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.874418 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.875181 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.882882 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.886863 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzptm\" (UniqueName: \"kubernetes.io/projected/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-kube-api-access-lzptm\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:32 crc kubenswrapper[5049]: I0127 17:18:32.889208 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") " pod="openstack/cinder-scheduler-0"
Jan 27 17:18:33 crc kubenswrapper[5049]: I0127 17:18:33.068246 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 17:18:33 crc kubenswrapper[5049]: I0127 17:18:33.560218 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 17:18:33 crc kubenswrapper[5049]: I0127 17:18:33.680602 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6" path="/var/lib/kubelet/pods/5fa7a6fa-16a5-47f1-ab73-cb78f197a8c6/volumes"
Jan 27 17:18:33 crc kubenswrapper[5049]: I0127 17:18:33.681450 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d76a4d6-b3a5-4931-9fb1-13531143ebaa","Type":"ContainerStarted","Data":"045e10e4a3d1c58210da9330a8483667bd509d77dfc2cf6efe56ef6f872afb29"}
Jan 27 17:18:34 crc kubenswrapper[5049]: I0127 17:18:34.035883 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d99c987d8-s77jv"
Jan 27 17:18:34 crc kubenswrapper[5049]: I0127 17:18:34.689611 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d76a4d6-b3a5-4931-9fb1-13531143ebaa","Type":"ContainerStarted","Data":"1ad029fde4bbfe950ea64c277ff2274dbaf9c65f928fb3d6f8c204dfa84b5ab2"}
Jan 27 17:18:35 crc kubenswrapper[5049]: I0127 17:18:35.362569 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 27 17:18:35 crc kubenswrapper[5049]: I0127 17:18:35.701964 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d76a4d6-b3a5-4931-9fb1-13531143ebaa","Type":"ContainerStarted","Data":"86ae2065456e0be818d0d2f291c75fa54963a08b110e6a19c05980e7f58e4078"}
Jan 27 17:18:35 crc kubenswrapper[5049]: I0127 17:18:35.724228 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.7242125809999997 podStartE2EDuration="3.724212581s" podCreationTimestamp="2026-01-27 17:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:18:35.72069164 +0000 UTC m=+1290.819665189" watchObservedRunningTime="2026-01-27 17:18:35.724212581 +0000 UTC m=+1290.823186130"
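Each "SyncLoop (PLEG): event for pod" line is the Pod Lifecycle Event Generator diffing runtime state between relists and emitting ContainerStarted/ContainerDied events that wake the sync loop; that is why cinder-scheduler-0 logs three ContainerStarted events (its sandbox plus two containers) before readiness settles. A condensed sketch with simplified types; the real relist is considerably more involved:

	package main

	import "fmt"

	type plegEvent struct {
		PodID string
		Type  string // "ContainerStarted" or "ContainerDied"
		Data  string // container ID
	}

	// relist diffs the previous and current running-container sets for a pod
	// and emits one event per change, which the sync loop then logs and acts on.
	func relist(pod string, old, now map[string]bool, events chan<- plegEvent) {
		for id := range now {
			if !old[id] {
				events <- plegEvent{pod, "ContainerStarted", id}
			}
		}
		for id := range old {
			if !now[id] {
				events <- plegEvent{pod, "ContainerDied", id}
			}
		}
	}

	func main() {
		events := make(chan plegEvent, 4)
		relist("openstack/cinder-scheduler-0",
			map[string]bool{},
			map[string]bool{"045e10e4a3d1": true, "1ad029fde4bb": true},
			events)
		close(events)
		for e := range events {
			fmt.Printf("SyncLoop (PLEG): %s %s %s\n", e.Type, e.PodID, e.Data)
		}
	}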
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.071374 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6dbbddc5bc-5k4jm"]
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.074815 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.085689 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.086390 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.086883 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.101711 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6dbbddc5bc-5k4jm"]
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.230859 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-public-tls-certs\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.230899 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-combined-ca-bundle\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.230930 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a923a49d-7e17-40a5-975a-9f4a39f92d51-etc-swift\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.230949 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-config-data\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.231083 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a923a49d-7e17-40a5-975a-9f4a39f92d51-run-httpd\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.231154 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-internal-tls-certs\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.231178 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdffl\" (UniqueName: \"kubernetes.io/projected/a923a49d-7e17-40a5-975a-9f4a39f92d51-kube-api-access-tdffl\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.231246 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a923a49d-7e17-40a5-975a-9f4a39f92d51-log-httpd\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.332990 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a923a49d-7e17-40a5-975a-9f4a39f92d51-run-httpd\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.333090 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-internal-tls-certs\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.333120 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdffl\" (UniqueName: \"kubernetes.io/projected/a923a49d-7e17-40a5-975a-9f4a39f92d51-kube-api-access-tdffl\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.333164 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a923a49d-7e17-40a5-975a-9f4a39f92d51-log-httpd\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.333192 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-public-tls-certs\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.333212 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-combined-ca-bundle\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.333234 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a923a49d-7e17-40a5-975a-9f4a39f92d51-etc-swift\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.333250 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-config-data\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.334223 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a923a49d-7e17-40a5-975a-9f4a39f92d51-run-httpd\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.334398 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a923a49d-7e17-40a5-975a-9f4a39f92d51-log-httpd\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.341285 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-public-tls-certs\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.341775 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-internal-tls-certs\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.342286 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-combined-ca-bundle\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.342997 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-config-data\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.344733 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a923a49d-7e17-40a5-975a-9f4a39f92d51-etc-swift\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.355519 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdffl\" (UniqueName: \"kubernetes.io/projected/a923a49d-7e17-40a5-975a-9f4a39f92d51-kube-api-access-tdffl\") pod \"swift-proxy-6dbbddc5bc-5k4jm\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " pod="openstack/swift-proxy-6dbbddc5bc-5k4jm"
Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.397106 5049 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.822131 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.881709 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d99c987d8-s77jv"] Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.881964 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d99c987d8-s77jv" podUID="43c1b33a-6a3f-41d0-9df3-08eb35e89315" containerName="neutron-api" containerID="cri-o://95ebf3fa581f6b2ef306585f39b1a074437c276cccd966b6119dbd09f2d3a5ec" gracePeriod=30 Jan 27 17:18:36 crc kubenswrapper[5049]: I0127 17:18:36.882400 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d99c987d8-s77jv" podUID="43c1b33a-6a3f-41d0-9df3-08eb35e89315" containerName="neutron-httpd" containerID="cri-o://9ade653a6243dab269f655544c3e6b332582ce038ff0bc3d90f5a5dc8b566b5d" gracePeriod=30 Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.047598 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6dbbddc5bc-5k4jm"] Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.180175 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.180492 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="ceilometer-central-agent" containerID="cri-o://cc854d20296f557cd4d5d9bb19c6c03d1e1ce6eba46ab19dccf9ef4e6ff34dc3" gracePeriod=30 Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.181060 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="proxy-httpd" containerID="cri-o://be3176317fe4a09785260b85b11a6aedb8f5aa434ba3039a57f442778bacddf2" gracePeriod=30 Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.181136 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="sg-core" containerID="cri-o://02693659af0a8f83e6f343ad384ad7898ed03d93a76b5ac9b9e8d3acabfc953c" gracePeriod=30 Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.181190 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="ceilometer-notification-agent" containerID="cri-o://54661e2c07ff58f9fc9388a5b237bae129bcfeaff2a90f47c3ff548a2c5e3364" gracePeriod=30 Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.195835 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.162:3000/\": EOF" Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.775638 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" event={"ID":"a923a49d-7e17-40a5-975a-9f4a39f92d51","Type":"ContainerStarted","Data":"7726471af8c08f6c10586088666d4b505ea63dc887de8daf8cc743ea2173f215"} Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.775995 5049 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" event={"ID":"a923a49d-7e17-40a5-975a-9f4a39f92d51","Type":"ContainerStarted","Data":"d21f3a14ae01f4df5aae43325cf7e59b6173d081cb638021f0c8cac28f33e789"} Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.776007 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" event={"ID":"a923a49d-7e17-40a5-975a-9f4a39f92d51","Type":"ContainerStarted","Data":"b86e3decd4dd56125a0ab54fe5423fbef05fc062938cc71e70bb55fc5a87a815"} Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.779305 5049 generic.go:334] "Generic (PLEG): container finished" podID="43c1b33a-6a3f-41d0-9df3-08eb35e89315" containerID="9ade653a6243dab269f655544c3e6b332582ce038ff0bc3d90f5a5dc8b566b5d" exitCode=0 Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.779372 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d99c987d8-s77jv" event={"ID":"43c1b33a-6a3f-41d0-9df3-08eb35e89315","Type":"ContainerDied","Data":"9ade653a6243dab269f655544c3e6b332582ce038ff0bc3d90f5a5dc8b566b5d"} Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.782751 5049 generic.go:334] "Generic (PLEG): container finished" podID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerID="be3176317fe4a09785260b85b11a6aedb8f5aa434ba3039a57f442778bacddf2" exitCode=0 Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.782777 5049 generic.go:334] "Generic (PLEG): container finished" podID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerID="02693659af0a8f83e6f343ad384ad7898ed03d93a76b5ac9b9e8d3acabfc953c" exitCode=2 Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.782784 5049 generic.go:334] "Generic (PLEG): container finished" podID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerID="cc854d20296f557cd4d5d9bb19c6c03d1e1ce6eba46ab19dccf9ef4e6ff34dc3" exitCode=0 Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.782805 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b68f766-1ba0-4041-a963-8d115bacfc30","Type":"ContainerDied","Data":"be3176317fe4a09785260b85b11a6aedb8f5aa434ba3039a57f442778bacddf2"} Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.782832 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b68f766-1ba0-4041-a963-8d115bacfc30","Type":"ContainerDied","Data":"02693659af0a8f83e6f343ad384ad7898ed03d93a76b5ac9b9e8d3acabfc953c"} Jan 27 17:18:37 crc kubenswrapper[5049]: I0127 17:18:37.782842 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b68f766-1ba0-4041-a963-8d115bacfc30","Type":"ContainerDied","Data":"cc854d20296f557cd4d5d9bb19c6c03d1e1ce6eba46ab19dccf9ef4e6ff34dc3"} Jan 27 17:18:38 crc kubenswrapper[5049]: I0127 17:18:38.069718 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 17:18:38 crc kubenswrapper[5049]: I0127 17:18:38.794576 5049 generic.go:334] "Generic (PLEG): container finished" podID="43c1b33a-6a3f-41d0-9df3-08eb35e89315" containerID="95ebf3fa581f6b2ef306585f39b1a074437c276cccd966b6119dbd09f2d3a5ec" exitCode=0 Jan 27 17:18:38 crc kubenswrapper[5049]: I0127 17:18:38.794661 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d99c987d8-s77jv" event={"ID":"43c1b33a-6a3f-41d0-9df3-08eb35e89315","Type":"ContainerDied","Data":"95ebf3fa581f6b2ef306585f39b1a074437c276cccd966b6119dbd09f2d3a5ec"} Jan 27 17:18:38 crc kubenswrapper[5049]: I0127 17:18:38.795444 5049 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" Jan 27 17:18:38 crc kubenswrapper[5049]: I0127 17:18:38.795477 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" Jan 27 17:18:41 crc kubenswrapper[5049]: I0127 17:18:41.836300 5049 generic.go:334] "Generic (PLEG): container finished" podID="ca9c6d10-6357-4632-9c0f-ff477e8526f0" containerID="ea17395c3b3bc8724181c64f6fb84ecf9ae264353cc2faa255466f7ac04ec368" exitCode=137 Jan 27 17:18:41 crc kubenswrapper[5049]: I0127 17:18:41.836880 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b447db58f-bdtv6" event={"ID":"ca9c6d10-6357-4632-9c0f-ff477e8526f0","Type":"ContainerDied","Data":"ea17395c3b3bc8724181c64f6fb84ecf9ae264353cc2faa255466f7ac04ec368"} Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.321262 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" podStartSLOduration=6.32123835 podStartE2EDuration="6.32123835s" podCreationTimestamp="2026-01-27 17:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:18:37.802147071 +0000 UTC m=+1292.901120620" watchObservedRunningTime="2026-01-27 17:18:42.32123835 +0000 UTC m=+1297.420211899" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.332927 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-xh8l6"] Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.334002 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xh8l6" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.345098 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xh8l6"] Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.423338 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-tnr58"] Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.424717 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tnr58" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.434083 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tnr58"] Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.437251 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09c43be6-a13b-4447-b9f8-e6aeacd4b2be-operator-scripts\") pod \"nova-api-db-create-xh8l6\" (UID: \"09c43be6-a13b-4447-b9f8-e6aeacd4b2be\") " pod="openstack/nova-api-db-create-xh8l6" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.437393 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kprd8\" (UniqueName: \"kubernetes.io/projected/09c43be6-a13b-4447-b9f8-e6aeacd4b2be-kube-api-access-kprd8\") pod \"nova-api-db-create-xh8l6\" (UID: \"09c43be6-a13b-4447-b9f8-e6aeacd4b2be\") " pod="openstack/nova-api-db-create-xh8l6" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.536726 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d95d-account-create-update-ggmht"] Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.537999 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d95d-account-create-update-ggmht" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.539780 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09c43be6-a13b-4447-b9f8-e6aeacd4b2be-operator-scripts\") pod \"nova-api-db-create-xh8l6\" (UID: \"09c43be6-a13b-4447-b9f8-e6aeacd4b2be\") " pod="openstack/nova-api-db-create-xh8l6" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.540642 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.540829 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09c43be6-a13b-4447-b9f8-e6aeacd4b2be-operator-scripts\") pod \"nova-api-db-create-xh8l6\" (UID: \"09c43be6-a13b-4447-b9f8-e6aeacd4b2be\") " pod="openstack/nova-api-db-create-xh8l6" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.540894 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kprd8\" (UniqueName: \"kubernetes.io/projected/09c43be6-a13b-4447-b9f8-e6aeacd4b2be-kube-api-access-kprd8\") pod \"nova-api-db-create-xh8l6\" (UID: \"09c43be6-a13b-4447-b9f8-e6aeacd4b2be\") " pod="openstack/nova-api-db-create-xh8l6" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.540940 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78rgv\" (UniqueName: \"kubernetes.io/projected/255716e0-246f-4167-a784-df7005bade5d-kube-api-access-78rgv\") pod \"nova-cell0-db-create-tnr58\" (UID: \"255716e0-246f-4167-a784-df7005bade5d\") " pod="openstack/nova-cell0-db-create-tnr58" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.541076 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/255716e0-246f-4167-a784-df7005bade5d-operator-scripts\") pod \"nova-cell0-db-create-tnr58\" (UID: \"255716e0-246f-4167-a784-df7005bade5d\") " pod="openstack/nova-cell0-db-create-tnr58" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.548721 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d95d-account-create-update-ggmht"] Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.575008 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kprd8\" (UniqueName: \"kubernetes.io/projected/09c43be6-a13b-4447-b9f8-e6aeacd4b2be-kube-api-access-kprd8\") pod \"nova-api-db-create-xh8l6\" (UID: \"09c43be6-a13b-4447-b9f8-e6aeacd4b2be\") " pod="openstack/nova-api-db-create-xh8l6" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.627814 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.162:3000/\": dial tcp 10.217.0.162:3000: connect: connection refused" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.632929 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-sqdsx"] Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.634140 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-sqdsx" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.642781 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78rgv\" (UniqueName: \"kubernetes.io/projected/255716e0-246f-4167-a784-df7005bade5d-kube-api-access-78rgv\") pod \"nova-cell0-db-create-tnr58\" (UID: \"255716e0-246f-4167-a784-df7005bade5d\") " pod="openstack/nova-cell0-db-create-tnr58" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.642878 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/255716e0-246f-4167-a784-df7005bade5d-operator-scripts\") pod \"nova-cell0-db-create-tnr58\" (UID: \"255716e0-246f-4167-a784-df7005bade5d\") " pod="openstack/nova-cell0-db-create-tnr58" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.642997 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldqjj\" (UniqueName: \"kubernetes.io/projected/dd9b47ed-4021-4981-8975-d8af2c7d80ce-kube-api-access-ldqjj\") pod \"nova-api-d95d-account-create-update-ggmht\" (UID: \"dd9b47ed-4021-4981-8975-d8af2c7d80ce\") " pod="openstack/nova-api-d95d-account-create-update-ggmht" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.643027 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd9b47ed-4021-4981-8975-d8af2c7d80ce-operator-scripts\") pod \"nova-api-d95d-account-create-update-ggmht\" (UID: \"dd9b47ed-4021-4981-8975-d8af2c7d80ce\") " pod="openstack/nova-api-d95d-account-create-update-ggmht" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.643144 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sqdsx"] Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.644194 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/255716e0-246f-4167-a784-df7005bade5d-operator-scripts\") pod \"nova-cell0-db-create-tnr58\" (UID: \"255716e0-246f-4167-a784-df7005bade5d\") " pod="openstack/nova-cell0-db-create-tnr58" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.653285 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xh8l6" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.666324 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78rgv\" (UniqueName: \"kubernetes.io/projected/255716e0-246f-4167-a784-df7005bade5d-kube-api-access-78rgv\") pod \"nova-cell0-db-create-tnr58\" (UID: \"255716e0-246f-4167-a784-df7005bade5d\") " pod="openstack/nova-cell0-db-create-tnr58" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.731374 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-80af-account-create-update-9lhmg"] Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.732775 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-80af-account-create-update-9lhmg" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.735217 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.751885 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9qrs\" (UniqueName: \"kubernetes.io/projected/61ffaea7-3bce-404a-9717-0e0e9b49c9d4-kube-api-access-h9qrs\") pod \"nova-cell1-db-create-sqdsx\" (UID: \"61ffaea7-3bce-404a-9717-0e0e9b49c9d4\") " pod="openstack/nova-cell1-db-create-sqdsx" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.751959 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldqjj\" (UniqueName: \"kubernetes.io/projected/dd9b47ed-4021-4981-8975-d8af2c7d80ce-kube-api-access-ldqjj\") pod \"nova-api-d95d-account-create-update-ggmht\" (UID: \"dd9b47ed-4021-4981-8975-d8af2c7d80ce\") " pod="openstack/nova-api-d95d-account-create-update-ggmht" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.751986 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd9b47ed-4021-4981-8975-d8af2c7d80ce-operator-scripts\") pod \"nova-api-d95d-account-create-update-ggmht\" (UID: \"dd9b47ed-4021-4981-8975-d8af2c7d80ce\") " pod="openstack/nova-api-d95d-account-create-update-ggmht" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.752093 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ffaea7-3bce-404a-9717-0e0e9b49c9d4-operator-scripts\") pod \"nova-cell1-db-create-sqdsx\" (UID: \"61ffaea7-3bce-404a-9717-0e0e9b49c9d4\") " pod="openstack/nova-cell1-db-create-sqdsx" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.752325 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tnr58" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.753383 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd9b47ed-4021-4981-8975-d8af2c7d80ce-operator-scripts\") pod \"nova-api-d95d-account-create-update-ggmht\" (UID: \"dd9b47ed-4021-4981-8975-d8af2c7d80ce\") " pod="openstack/nova-api-d95d-account-create-update-ggmht" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.757927 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-80af-account-create-update-9lhmg"] Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.778364 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldqjj\" (UniqueName: \"kubernetes.io/projected/dd9b47ed-4021-4981-8975-d8af2c7d80ce-kube-api-access-ldqjj\") pod \"nova-api-d95d-account-create-update-ggmht\" (UID: \"dd9b47ed-4021-4981-8975-d8af2c7d80ce\") " pod="openstack/nova-api-d95d-account-create-update-ggmht" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.848040 5049 generic.go:334] "Generic (PLEG): container finished" podID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerID="54661e2c07ff58f9fc9388a5b237bae129bcfeaff2a90f47c3ff548a2c5e3364" exitCode=0 Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.848077 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b68f766-1ba0-4041-a963-8d115bacfc30","Type":"ContainerDied","Data":"54661e2c07ff58f9fc9388a5b237bae129bcfeaff2a90f47c3ff548a2c5e3364"} Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.854231 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bqrk\" (UniqueName: \"kubernetes.io/projected/34ab2df7-a5b7-463d-96c6-b2d208031c97-kube-api-access-2bqrk\") pod \"nova-cell0-80af-account-create-update-9lhmg\" (UID: \"34ab2df7-a5b7-463d-96c6-b2d208031c97\") " pod="openstack/nova-cell0-80af-account-create-update-9lhmg" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.854401 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ffaea7-3bce-404a-9717-0e0e9b49c9d4-operator-scripts\") pod \"nova-cell1-db-create-sqdsx\" (UID: \"61ffaea7-3bce-404a-9717-0e0e9b49c9d4\") " pod="openstack/nova-cell1-db-create-sqdsx" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.854634 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9qrs\" (UniqueName: \"kubernetes.io/projected/61ffaea7-3bce-404a-9717-0e0e9b49c9d4-kube-api-access-h9qrs\") pod \"nova-cell1-db-create-sqdsx\" (UID: \"61ffaea7-3bce-404a-9717-0e0e9b49c9d4\") " pod="openstack/nova-cell1-db-create-sqdsx" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.854665 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ab2df7-a5b7-463d-96c6-b2d208031c97-operator-scripts\") pod \"nova-cell0-80af-account-create-update-9lhmg\" (UID: \"34ab2df7-a5b7-463d-96c6-b2d208031c97\") " pod="openstack/nova-cell0-80af-account-create-update-9lhmg" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.856195 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/61ffaea7-3bce-404a-9717-0e0e9b49c9d4-operator-scripts\") pod \"nova-cell1-db-create-sqdsx\" (UID: \"61ffaea7-3bce-404a-9717-0e0e9b49c9d4\") " pod="openstack/nova-cell1-db-create-sqdsx" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.870271 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9qrs\" (UniqueName: \"kubernetes.io/projected/61ffaea7-3bce-404a-9717-0e0e9b49c9d4-kube-api-access-h9qrs\") pod \"nova-cell1-db-create-sqdsx\" (UID: \"61ffaea7-3bce-404a-9717-0e0e9b49c9d4\") " pod="openstack/nova-cell1-db-create-sqdsx" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.931137 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d95d-account-create-update-ggmht" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.941069 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-544e-account-create-update-hnhlj"] Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.942821 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-544e-account-create-update-hnhlj" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.944901 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.947161 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-544e-account-create-update-hnhlj"] Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.957558 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ab2df7-a5b7-463d-96c6-b2d208031c97-operator-scripts\") pod \"nova-cell0-80af-account-create-update-9lhmg\" (UID: \"34ab2df7-a5b7-463d-96c6-b2d208031c97\") " pod="openstack/nova-cell0-80af-account-create-update-9lhmg" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.957614 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bqrk\" (UniqueName: \"kubernetes.io/projected/34ab2df7-a5b7-463d-96c6-b2d208031c97-kube-api-access-2bqrk\") pod \"nova-cell0-80af-account-create-update-9lhmg\" (UID: \"34ab2df7-a5b7-463d-96c6-b2d208031c97\") " pod="openstack/nova-cell0-80af-account-create-update-9lhmg" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.958703 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ab2df7-a5b7-463d-96c6-b2d208031c97-operator-scripts\") pod \"nova-cell0-80af-account-create-update-9lhmg\" (UID: \"34ab2df7-a5b7-463d-96c6-b2d208031c97\") " pod="openstack/nova-cell0-80af-account-create-update-9lhmg" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.965155 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-sqdsx" Jan 27 17:18:42 crc kubenswrapper[5049]: I0127 17:18:42.973364 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bqrk\" (UniqueName: \"kubernetes.io/projected/34ab2df7-a5b7-463d-96c6-b2d208031c97-kube-api-access-2bqrk\") pod \"nova-cell0-80af-account-create-update-9lhmg\" (UID: \"34ab2df7-a5b7-463d-96c6-b2d208031c97\") " pod="openstack/nova-cell0-80af-account-create-update-9lhmg" Jan 27 17:18:43 crc kubenswrapper[5049]: I0127 17:18:43.059537 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa66ef85-42ab-42a2-9ed2-3cd9210d962e-operator-scripts\") pod \"nova-cell1-544e-account-create-update-hnhlj\" (UID: \"aa66ef85-42ab-42a2-9ed2-3cd9210d962e\") " pod="openstack/nova-cell1-544e-account-create-update-hnhlj" Jan 27 17:18:43 crc kubenswrapper[5049]: I0127 17:18:43.059605 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4mbw\" (UniqueName: \"kubernetes.io/projected/aa66ef85-42ab-42a2-9ed2-3cd9210d962e-kube-api-access-b4mbw\") pod \"nova-cell1-544e-account-create-update-hnhlj\" (UID: \"aa66ef85-42ab-42a2-9ed2-3cd9210d962e\") " pod="openstack/nova-cell1-544e-account-create-update-hnhlj" Jan 27 17:18:43 crc kubenswrapper[5049]: I0127 17:18:43.069070 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-80af-account-create-update-9lhmg" Jan 27 17:18:43 crc kubenswrapper[5049]: I0127 17:18:43.161617 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa66ef85-42ab-42a2-9ed2-3cd9210d962e-operator-scripts\") pod \"nova-cell1-544e-account-create-update-hnhlj\" (UID: \"aa66ef85-42ab-42a2-9ed2-3cd9210d962e\") " pod="openstack/nova-cell1-544e-account-create-update-hnhlj" Jan 27 17:18:43 crc kubenswrapper[5049]: I0127 17:18:43.161740 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4mbw\" (UniqueName: \"kubernetes.io/projected/aa66ef85-42ab-42a2-9ed2-3cd9210d962e-kube-api-access-b4mbw\") pod \"nova-cell1-544e-account-create-update-hnhlj\" (UID: \"aa66ef85-42ab-42a2-9ed2-3cd9210d962e\") " pod="openstack/nova-cell1-544e-account-create-update-hnhlj" Jan 27 17:18:43 crc kubenswrapper[5049]: I0127 17:18:43.162985 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa66ef85-42ab-42a2-9ed2-3cd9210d962e-operator-scripts\") pod \"nova-cell1-544e-account-create-update-hnhlj\" (UID: \"aa66ef85-42ab-42a2-9ed2-3cd9210d962e\") " pod="openstack/nova-cell1-544e-account-create-update-hnhlj" Jan 27 17:18:43 crc kubenswrapper[5049]: I0127 17:18:43.182192 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4mbw\" (UniqueName: \"kubernetes.io/projected/aa66ef85-42ab-42a2-9ed2-3cd9210d962e-kube-api-access-b4mbw\") pod \"nova-cell1-544e-account-create-update-hnhlj\" (UID: \"aa66ef85-42ab-42a2-9ed2-3cd9210d962e\") " pod="openstack/nova-cell1-544e-account-create-update-hnhlj" Jan 27 17:18:43 crc kubenswrapper[5049]: I0127 17:18:43.274449 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 17:18:43 crc kubenswrapper[5049]: I0127 17:18:43.335737 5049 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-cell1-544e-account-create-update-hnhlj" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.346494 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.492250 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9c6d10-6357-4632-9c0f-ff477e8526f0-logs\") pod \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.492387 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-592f9\" (UniqueName: \"kubernetes.io/projected/ca9c6d10-6357-4632-9c0f-ff477e8526f0-kube-api-access-592f9\") pod \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.492539 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-combined-ca-bundle\") pod \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.492563 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-config-data-custom\") pod \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.492629 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-config-data\") pod \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\" (UID: \"ca9c6d10-6357-4632-9c0f-ff477e8526f0\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.492977 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca9c6d10-6357-4632-9c0f-ff477e8526f0-logs" (OuterVolumeSpecName: "logs") pod "ca9c6d10-6357-4632-9c0f-ff477e8526f0" (UID: "ca9c6d10-6357-4632-9c0f-ff477e8526f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.493124 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca9c6d10-6357-4632-9c0f-ff477e8526f0-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.498925 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ca9c6d10-6357-4632-9c0f-ff477e8526f0" (UID: "ca9c6d10-6357-4632-9c0f-ff477e8526f0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.499972 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca9c6d10-6357-4632-9c0f-ff477e8526f0-kube-api-access-592f9" (OuterVolumeSpecName: "kube-api-access-592f9") pod "ca9c6d10-6357-4632-9c0f-ff477e8526f0" (UID: "ca9c6d10-6357-4632-9c0f-ff477e8526f0"). 
InnerVolumeSpecName "kube-api-access-592f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.515428 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.522555 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.531039 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca9c6d10-6357-4632-9c0f-ff477e8526f0" (UID: "ca9c6d10-6357-4632-9c0f-ff477e8526f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.550778 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-config-data" (OuterVolumeSpecName: "config-data") pod "ca9c6d10-6357-4632-9c0f-ff477e8526f0" (UID: "ca9c6d10-6357-4632-9c0f-ff477e8526f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.595294 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhlnt\" (UniqueName: \"kubernetes.io/projected/43c1b33a-6a3f-41d0-9df3-08eb35e89315-kube-api-access-fhlnt\") pod \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.595359 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-scripts\") pod \"0b68f766-1ba0-4041-a963-8d115bacfc30\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.595394 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-combined-ca-bundle\") pod \"0b68f766-1ba0-4041-a963-8d115bacfc30\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.595458 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b68f766-1ba0-4041-a963-8d115bacfc30-run-httpd\") pod \"0b68f766-1ba0-4041-a963-8d115bacfc30\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.595478 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-sg-core-conf-yaml\") pod \"0b68f766-1ba0-4041-a963-8d115bacfc30\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.595520 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-config\") pod \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.595539 5049 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-ovndb-tls-certs\") pod \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.595566 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-httpd-config\") pod \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.595597 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-combined-ca-bundle\") pod \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\" (UID: \"43c1b33a-6a3f-41d0-9df3-08eb35e89315\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.595621 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hdm7\" (UniqueName: \"kubernetes.io/projected/0b68f766-1ba0-4041-a963-8d115bacfc30-kube-api-access-6hdm7\") pod \"0b68f766-1ba0-4041-a963-8d115bacfc30\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.595652 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-config-data\") pod \"0b68f766-1ba0-4041-a963-8d115bacfc30\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.595702 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b68f766-1ba0-4041-a963-8d115bacfc30-log-httpd\") pod \"0b68f766-1ba0-4041-a963-8d115bacfc30\" (UID: \"0b68f766-1ba0-4041-a963-8d115bacfc30\") " Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.596213 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.596231 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.596240 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca9c6d10-6357-4632-9c0f-ff477e8526f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.596249 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-592f9\" (UniqueName: \"kubernetes.io/projected/ca9c6d10-6357-4632-9c0f-ff477e8526f0-kube-api-access-592f9\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.598666 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43c1b33a-6a3f-41d0-9df3-08eb35e89315-kube-api-access-fhlnt" (OuterVolumeSpecName: "kube-api-access-fhlnt") pod "43c1b33a-6a3f-41d0-9df3-08eb35e89315" (UID: "43c1b33a-6a3f-41d0-9df3-08eb35e89315"). InnerVolumeSpecName "kube-api-access-fhlnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.604890 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b68f766-1ba0-4041-a963-8d115bacfc30-kube-api-access-6hdm7" (OuterVolumeSpecName: "kube-api-access-6hdm7") pod "0b68f766-1ba0-4041-a963-8d115bacfc30" (UID: "0b68f766-1ba0-4041-a963-8d115bacfc30"). InnerVolumeSpecName "kube-api-access-6hdm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.609020 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "43c1b33a-6a3f-41d0-9df3-08eb35e89315" (UID: "43c1b33a-6a3f-41d0-9df3-08eb35e89315"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.610258 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b68f766-1ba0-4041-a963-8d115bacfc30-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0b68f766-1ba0-4041-a963-8d115bacfc30" (UID: "0b68f766-1ba0-4041-a963-8d115bacfc30"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.610327 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b68f766-1ba0-4041-a963-8d115bacfc30-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0b68f766-1ba0-4041-a963-8d115bacfc30" (UID: "0b68f766-1ba0-4041-a963-8d115bacfc30"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.614706 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-scripts" (OuterVolumeSpecName: "scripts") pod "0b68f766-1ba0-4041-a963-8d115bacfc30" (UID: "0b68f766-1ba0-4041-a963-8d115bacfc30"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.663800 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0b68f766-1ba0-4041-a963-8d115bacfc30" (UID: "0b68f766-1ba0-4041-a963-8d115bacfc30"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.666507 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xh8l6"] Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.677027 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-544e-account-create-update-hnhlj"] Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.697575 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhlnt\" (UniqueName: \"kubernetes.io/projected/43c1b33a-6a3f-41d0-9df3-08eb35e89315-kube-api-access-fhlnt\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.697799 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.697836 5049 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b68f766-1ba0-4041-a963-8d115bacfc30-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.697848 5049 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.697859 5049 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.697869 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hdm7\" (UniqueName: \"kubernetes.io/projected/0b68f766-1ba0-4041-a963-8d115bacfc30-kube-api-access-6hdm7\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.697877 5049 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b68f766-1ba0-4041-a963-8d115bacfc30-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.699251 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43c1b33a-6a3f-41d0-9df3-08eb35e89315" (UID: "43c1b33a-6a3f-41d0-9df3-08eb35e89315"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.699820 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-config" (OuterVolumeSpecName: "config") pod "43c1b33a-6a3f-41d0-9df3-08eb35e89315" (UID: "43c1b33a-6a3f-41d0-9df3-08eb35e89315"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.713942 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b68f766-1ba0-4041-a963-8d115bacfc30" (UID: "0b68f766-1ba0-4041-a963-8d115bacfc30"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.717706 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "43c1b33a-6a3f-41d0-9df3-08eb35e89315" (UID: "43c1b33a-6a3f-41d0-9df3-08eb35e89315"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.724530 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-config-data" (OuterVolumeSpecName: "config-data") pod "0b68f766-1ba0-4041-a963-8d115bacfc30" (UID: "0b68f766-1ba0-4041-a963-8d115bacfc30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.803065 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.803087 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.803096 5049 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.803108 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c1b33a-6a3f-41d0-9df3-08eb35e89315-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.803117 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b68f766-1ba0-4041-a963-8d115bacfc30-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.851244 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sqdsx"] Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.896310 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-tnr58"] Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.896933 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xh8l6" event={"ID":"09c43be6-a13b-4447-b9f8-e6aeacd4b2be","Type":"ContainerStarted","Data":"acfe4924544521cef4b2856db6d06459aba68e6640f261338292f385b538555e"} Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.901628 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-80af-account-create-update-9lhmg" event={"ID":"34ab2df7-a5b7-463d-96c6-b2d208031c97","Type":"ContainerStarted","Data":"60e9bf4cb72493a41920e914425ff9573719aaa77a3789fd0dd7269859bc2b48"} Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.905565 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-80af-account-create-update-9lhmg"] Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.912795 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0b68f766-1ba0-4041-a963-8d115bacfc30","Type":"ContainerDied","Data":"1afa66230b35142f921633722943f21b796c96d8fe8ae1b26591ebdd0a1d2483"} Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.912852 5049 scope.go:117] "RemoveContainer" containerID="be3176317fe4a09785260b85b11a6aedb8f5aa434ba3039a57f442778bacddf2" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.912994 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.914020 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d95d-account-create-update-ggmht"] Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.920246 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b447db58f-bdtv6" event={"ID":"ca9c6d10-6357-4632-9c0f-ff477e8526f0","Type":"ContainerDied","Data":"c6a3c65c760b7689605cf05f2bd85f94d21d09f609f7509872ea03e672ea43a0"} Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.920269 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b447db58f-bdtv6" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.922644 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sqdsx" event={"ID":"61ffaea7-3bce-404a-9717-0e0e9b49c9d4","Type":"ContainerStarted","Data":"9abad3b8a570a22a5f05fe72c053b537ddcf2a96b66cc0e12d3c1659b74d6059"} Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.924714 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d99c987d8-s77jv" event={"ID":"43c1b33a-6a3f-41d0-9df3-08eb35e89315","Type":"ContainerDied","Data":"0076d4f73b147d4a839ee433c3cbd535c66b734bebda823c63537a255648784d"} Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.924827 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d99c987d8-s77jv" Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.928176 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-544e-account-create-update-hnhlj" event={"ID":"aa66ef85-42ab-42a2-9ed2-3cd9210d962e","Type":"ContainerStarted","Data":"7b5401831fa4bd7eac4796abee3b35fe574ba90871d12c4b60c11bf0c4a32107"} Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.959150 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:18:44 crc kubenswrapper[5049]: I0127 17:18:44.988069 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.013726 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d99c987d8-s77jv"] Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.023360 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:18:45 crc kubenswrapper[5049]: E0127 17:18:45.023899 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9c6d10-6357-4632-9c0f-ff477e8526f0" containerName="barbican-worker" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.023923 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9c6d10-6357-4632-9c0f-ff477e8526f0" containerName="barbican-worker" Jan 27 17:18:45 crc kubenswrapper[5049]: E0127 17:18:45.023939 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c1b33a-6a3f-41d0-9df3-08eb35e89315" containerName="neutron-httpd" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.023948 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c1b33a-6a3f-41d0-9df3-08eb35e89315" containerName="neutron-httpd" Jan 27 17:18:45 crc kubenswrapper[5049]: E0127 17:18:45.023965 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c1b33a-6a3f-41d0-9df3-08eb35e89315" containerName="neutron-api" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.023973 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c1b33a-6a3f-41d0-9df3-08eb35e89315" containerName="neutron-api" Jan 27 17:18:45 crc kubenswrapper[5049]: E0127 17:18:45.023994 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="ceilometer-notification-agent" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.024002 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="ceilometer-notification-agent" Jan 27 17:18:45 crc kubenswrapper[5049]: E0127 17:18:45.024015 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="sg-core" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.024023 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="sg-core" Jan 27 17:18:45 crc kubenswrapper[5049]: E0127 17:18:45.024041 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9c6d10-6357-4632-9c0f-ff477e8526f0" containerName="barbican-worker-log" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.024050 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9c6d10-6357-4632-9c0f-ff477e8526f0" containerName="barbican-worker-log" Jan 27 17:18:45 crc kubenswrapper[5049]: E0127 17:18:45.024077 5049 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="proxy-httpd" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.024086 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="proxy-httpd" Jan 27 17:18:45 crc kubenswrapper[5049]: E0127 17:18:45.024097 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="ceilometer-central-agent" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.024104 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="ceilometer-central-agent" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.024298 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="ceilometer-notification-agent" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.024324 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="ceilometer-central-agent" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.024336 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="sg-core" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.024344 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" containerName="proxy-httpd" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.024356 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca9c6d10-6357-4632-9c0f-ff477e8526f0" containerName="barbican-worker-log" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.024368 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c1b33a-6a3f-41d0-9df3-08eb35e89315" containerName="neutron-api" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.024380 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c1b33a-6a3f-41d0-9df3-08eb35e89315" containerName="neutron-httpd" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.024389 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca9c6d10-6357-4632-9c0f-ff477e8526f0" containerName="barbican-worker" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.024650 5049 scope.go:117] "RemoveContainer" containerID="02693659af0a8f83e6f343ad384ad7898ed03d93a76b5ac9b9e8d3acabfc953c" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.026336 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.029156 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.029182 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.037039 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6d99c987d8-s77jv"] Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.048114 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.057356 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5b447db58f-bdtv6"] Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.062639 5049 scope.go:117] "RemoveContainer" containerID="54661e2c07ff58f9fc9388a5b237bae129bcfeaff2a90f47c3ff548a2c5e3364" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.067144 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5b447db58f-bdtv6"] Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.108450 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1155387-125d-46be-a899-0ec8afc1411f-log-httpd\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.108502 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.108529 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-config-data\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.108568 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-scripts\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.108614 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkck4\" (UniqueName: \"kubernetes.io/projected/a1155387-125d-46be-a899-0ec8afc1411f-kube-api-access-zkck4\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.108653 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.108817 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1155387-125d-46be-a899-0ec8afc1411f-run-httpd\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.174693 5049 scope.go:117] "RemoveContainer" containerID="cc854d20296f557cd4d5d9bb19c6c03d1e1ce6eba46ab19dccf9ef4e6ff34dc3" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.200393 5049 scope.go:117] "RemoveContainer" containerID="ea17395c3b3bc8724181c64f6fb84ecf9ae264353cc2faa255466f7ac04ec368" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.210447 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-scripts\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.210495 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkck4\" (UniqueName: \"kubernetes.io/projected/a1155387-125d-46be-a899-0ec8afc1411f-kube-api-access-zkck4\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.210529 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.210620 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1155387-125d-46be-a899-0ec8afc1411f-run-httpd\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.210658 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1155387-125d-46be-a899-0ec8afc1411f-log-httpd\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.210688 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.210703 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-config-data\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.214347 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1155387-125d-46be-a899-0ec8afc1411f-log-httpd\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.214575 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1155387-125d-46be-a899-0ec8afc1411f-run-httpd\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.217566 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.217665 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-config-data\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.217906 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-scripts\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.218994 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.245539 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkck4\" (UniqueName: \"kubernetes.io/projected/a1155387-125d-46be-a899-0ec8afc1411f-kube-api-access-zkck4\") pod \"ceilometer-0\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.268541 5049 scope.go:117] "RemoveContainer" containerID="02c12d734fa1941b4cc9bfcf5f2b4b9a40625cd00b35e0aaf8d32fae4ecdf5c3" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.336897 5049 scope.go:117] "RemoveContainer" containerID="9ade653a6243dab269f655544c3e6b332582ce038ff0bc3d90f5a5dc8b566b5d" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.373806 5049 scope.go:117] "RemoveContainer" containerID="95ebf3fa581f6b2ef306585f39b1a074437c276cccd966b6119dbd09f2d3a5ec" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.455402 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.660498 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b68f766-1ba0-4041-a963-8d115bacfc30" path="/var/lib/kubelet/pods/0b68f766-1ba0-4041-a963-8d115bacfc30/volumes" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.662792 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43c1b33a-6a3f-41d0-9df3-08eb35e89315" path="/var/lib/kubelet/pods/43c1b33a-6a3f-41d0-9df3-08eb35e89315/volumes" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.663784 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca9c6d10-6357-4632-9c0f-ff477e8526f0" path="/var/lib/kubelet/pods/ca9c6d10-6357-4632-9c0f-ff477e8526f0/volumes" Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.948936 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d95d-account-create-update-ggmht" event={"ID":"dd9b47ed-4021-4981-8975-d8af2c7d80ce","Type":"ContainerStarted","Data":"e5eb2b2fb188212e98c83d5f12d536d1352371cd983ec7f303ef474de54ba07e"} Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.955403 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-544e-account-create-update-hnhlj" event={"ID":"aa66ef85-42ab-42a2-9ed2-3cd9210d962e","Type":"ContainerStarted","Data":"937e562346ae487fb3be55f7c8b72e630d672dcec3184ea0f4d6dfb0b5d0bebd"} Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.957636 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tnr58" event={"ID":"255716e0-246f-4167-a784-df7005bade5d","Type":"ContainerStarted","Data":"78bd9a2ab8321206246df6eadac7aaacd5086d12a959cffbf9b1e22e1bd87b64"} Jan 27 17:18:45 crc kubenswrapper[5049]: I0127 17:18:45.961221 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xh8l6" event={"ID":"09c43be6-a13b-4447-b9f8-e6aeacd4b2be","Type":"ContainerStarted","Data":"4d62fe3f218ab430b02c0a5feb5685b2843febf6591ab36fb4b523d784f6cfe2"} Jan 27 17:18:46 crc kubenswrapper[5049]: I0127 17:18:46.403652 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" Jan 27 17:18:46 crc kubenswrapper[5049]: I0127 17:18:46.404065 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" Jan 27 17:18:46 crc kubenswrapper[5049]: I0127 17:18:46.988291 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-80af-account-create-update-9lhmg" event={"ID":"34ab2df7-a5b7-463d-96c6-b2d208031c97","Type":"ContainerStarted","Data":"b47f5414c1582f21b1da4088380b6b02a451e3a5445b297b8bf6ad208bb2d933"} Jan 27 17:18:46 crc kubenswrapper[5049]: I0127 17:18:46.990626 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d95d-account-create-update-ggmht" event={"ID":"dd9b47ed-4021-4981-8975-d8af2c7d80ce","Type":"ContainerStarted","Data":"cecd0a11e45fbdb4890c4d67eb2840588379ffa4729b0318a1da22165154bf43"} Jan 27 17:18:46 crc kubenswrapper[5049]: I0127 17:18:46.992922 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sqdsx" event={"ID":"61ffaea7-3bce-404a-9717-0e0e9b49c9d4","Type":"ContainerStarted","Data":"f0557be01aeda0b262ae4132263b0d52831fe6b39db78cba0346c927862f56d4"} Jan 27 17:18:46 crc kubenswrapper[5049]: I0127 17:18:46.995435 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-tnr58" event={"ID":"255716e0-246f-4167-a784-df7005bade5d","Type":"ContainerStarted","Data":"44858ca344cb02f0f890719655ebff9255be9cb7ce9987f2a0c412b281f23bd6"} Jan 27 17:18:47 crc kubenswrapper[5049]: I0127 17:18:47.021290 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-xh8l6" podStartSLOduration=5.021267133 podStartE2EDuration="5.021267133s" podCreationTimestamp="2026-01-27 17:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:18:47.0106884 +0000 UTC m=+1302.109661949" watchObservedRunningTime="2026-01-27 17:18:47.021267133 +0000 UTC m=+1302.120240682" Jan 27 17:18:47 crc kubenswrapper[5049]: I0127 17:18:47.037232 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-544e-account-create-update-hnhlj" podStartSLOduration=5.037209159 podStartE2EDuration="5.037209159s" podCreationTimestamp="2026-01-27 17:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:18:47.023443295 +0000 UTC m=+1302.122416884" watchObservedRunningTime="2026-01-27 17:18:47.037209159 +0000 UTC m=+1302.136182708" Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.004600 5049 generic.go:334] "Generic (PLEG): container finished" podID="09c43be6-a13b-4447-b9f8-e6aeacd4b2be" containerID="4d62fe3f218ab430b02c0a5feb5685b2843febf6591ab36fb4b523d784f6cfe2" exitCode=0 Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.004717 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xh8l6" event={"ID":"09c43be6-a13b-4447-b9f8-e6aeacd4b2be","Type":"ContainerDied","Data":"4d62fe3f218ab430b02c0a5feb5685b2843febf6591ab36fb4b523d784f6cfe2"} Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.007196 5049 generic.go:334] "Generic (PLEG): container finished" podID="34ab2df7-a5b7-463d-96c6-b2d208031c97" containerID="b47f5414c1582f21b1da4088380b6b02a451e3a5445b297b8bf6ad208bb2d933" exitCode=0 Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.007277 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-80af-account-create-update-9lhmg" event={"ID":"34ab2df7-a5b7-463d-96c6-b2d208031c97","Type":"ContainerDied","Data":"b47f5414c1582f21b1da4088380b6b02a451e3a5445b297b8bf6ad208bb2d933"} Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.009126 5049 generic.go:334] "Generic (PLEG): container finished" podID="dd9b47ed-4021-4981-8975-d8af2c7d80ce" containerID="cecd0a11e45fbdb4890c4d67eb2840588379ffa4729b0318a1da22165154bf43" exitCode=0 Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.009206 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d95d-account-create-update-ggmht" event={"ID":"dd9b47ed-4021-4981-8975-d8af2c7d80ce","Type":"ContainerDied","Data":"cecd0a11e45fbdb4890c4d67eb2840588379ffa4729b0318a1da22165154bf43"} Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.012786 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"467be3c3-34b2-4cea-8785-bacf5a6a5a39","Type":"ContainerStarted","Data":"d37d30713855c20058ddaa0bf88d078a2270dcf3c57898ea717f889e47119dd9"} Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.016833 5049 generic.go:334] "Generic (PLEG): container finished" podID="61ffaea7-3bce-404a-9717-0e0e9b49c9d4" 
containerID="f0557be01aeda0b262ae4132263b0d52831fe6b39db78cba0346c927862f56d4" exitCode=0 Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.016909 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sqdsx" event={"ID":"61ffaea7-3bce-404a-9717-0e0e9b49c9d4","Type":"ContainerDied","Data":"f0557be01aeda0b262ae4132263b0d52831fe6b39db78cba0346c927862f56d4"} Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.019058 5049 generic.go:334] "Generic (PLEG): container finished" podID="aa66ef85-42ab-42a2-9ed2-3cd9210d962e" containerID="937e562346ae487fb3be55f7c8b72e630d672dcec3184ea0f4d6dfb0b5d0bebd" exitCode=0 Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.019125 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-544e-account-create-update-hnhlj" event={"ID":"aa66ef85-42ab-42a2-9ed2-3cd9210d962e","Type":"ContainerDied","Data":"937e562346ae487fb3be55f7c8b72e630d672dcec3184ea0f4d6dfb0b5d0bebd"} Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.021110 5049 generic.go:334] "Generic (PLEG): container finished" podID="255716e0-246f-4167-a784-df7005bade5d" containerID="44858ca344cb02f0f890719655ebff9255be9cb7ce9987f2a0c412b281f23bd6" exitCode=0 Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.021158 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tnr58" event={"ID":"255716e0-246f-4167-a784-df7005bade5d","Type":"ContainerDied","Data":"44858ca344cb02f0f890719655ebff9255be9cb7ce9987f2a0c412b281f23bd6"} Jan 27 17:18:48 crc kubenswrapper[5049]: W0127 17:18:48.059019 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1155387_125d_46be_a899_0ec8afc1411f.slice/crio-ef9babfee2b25d52c361535f222e79111a3a7ae41c55aca45e00305d179eee18 WatchSource:0}: Error finding container ef9babfee2b25d52c361535f222e79111a3a7ae41c55aca45e00305d179eee18: Status 404 returned error can't find the container with id ef9babfee2b25d52c361535f222e79111a3a7ae41c55aca45e00305d179eee18 Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.064317 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:18:48 crc kubenswrapper[5049]: I0127 17:18:48.095233 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.444527725 podStartE2EDuration="18.095215104s" podCreationTimestamp="2026-01-27 17:18:30 +0000 UTC" firstStartedPulling="2026-01-27 17:18:31.897342427 +0000 UTC m=+1286.996315976" lastFinishedPulling="2026-01-27 17:18:47.548029796 +0000 UTC m=+1302.647003355" observedRunningTime="2026-01-27 17:18:48.078790054 +0000 UTC m=+1303.177763603" watchObservedRunningTime="2026-01-27 17:18:48.095215104 +0000 UTC m=+1303.194188663" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.031172 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1155387-125d-46be-a899-0ec8afc1411f","Type":"ContainerStarted","Data":"c37cefb751d64431838d0d639d77094b356c1cbc28d82c80423ed1139e6e6a83"} Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.031532 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1155387-125d-46be-a899-0ec8afc1411f","Type":"ContainerStarted","Data":"ef9babfee2b25d52c361535f222e79111a3a7ae41c55aca45e00305d179eee18"} Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.486281 5049 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-tnr58" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.593941 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78rgv\" (UniqueName: \"kubernetes.io/projected/255716e0-246f-4167-a784-df7005bade5d-kube-api-access-78rgv\") pod \"255716e0-246f-4167-a784-df7005bade5d\" (UID: \"255716e0-246f-4167-a784-df7005bade5d\") " Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.594087 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/255716e0-246f-4167-a784-df7005bade5d-operator-scripts\") pod \"255716e0-246f-4167-a784-df7005bade5d\" (UID: \"255716e0-246f-4167-a784-df7005bade5d\") " Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.594891 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/255716e0-246f-4167-a784-df7005bade5d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "255716e0-246f-4167-a784-df7005bade5d" (UID: "255716e0-246f-4167-a784-df7005bade5d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.603083 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/255716e0-246f-4167-a784-df7005bade5d-kube-api-access-78rgv" (OuterVolumeSpecName: "kube-api-access-78rgv") pod "255716e0-246f-4167-a784-df7005bade5d" (UID: "255716e0-246f-4167-a784-df7005bade5d"). InnerVolumeSpecName "kube-api-access-78rgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.653759 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sqdsx" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.680481 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d95d-account-create-update-ggmht" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.695095 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ffaea7-3bce-404a-9717-0e0e9b49c9d4-operator-scripts\") pod \"61ffaea7-3bce-404a-9717-0e0e9b49c9d4\" (UID: \"61ffaea7-3bce-404a-9717-0e0e9b49c9d4\") " Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.695241 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9qrs\" (UniqueName: \"kubernetes.io/projected/61ffaea7-3bce-404a-9717-0e0e9b49c9d4-kube-api-access-h9qrs\") pod \"61ffaea7-3bce-404a-9717-0e0e9b49c9d4\" (UID: \"61ffaea7-3bce-404a-9717-0e0e9b49c9d4\") " Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.695693 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/255716e0-246f-4167-a784-df7005bade5d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.695710 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78rgv\" (UniqueName: \"kubernetes.io/projected/255716e0-246f-4167-a784-df7005bade5d-kube-api-access-78rgv\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.698602 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ffaea7-3bce-404a-9717-0e0e9b49c9d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61ffaea7-3bce-404a-9717-0e0e9b49c9d4" (UID: "61ffaea7-3bce-404a-9717-0e0e9b49c9d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.701096 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ffaea7-3bce-404a-9717-0e0e9b49c9d4-kube-api-access-h9qrs" (OuterVolumeSpecName: "kube-api-access-h9qrs") pod "61ffaea7-3bce-404a-9717-0e0e9b49c9d4" (UID: "61ffaea7-3bce-404a-9717-0e0e9b49c9d4"). InnerVolumeSpecName "kube-api-access-h9qrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.777254 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-80af-account-create-update-9lhmg" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.791214 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-544e-account-create-update-hnhlj" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.797058 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldqjj\" (UniqueName: \"kubernetes.io/projected/dd9b47ed-4021-4981-8975-d8af2c7d80ce-kube-api-access-ldqjj\") pod \"dd9b47ed-4021-4981-8975-d8af2c7d80ce\" (UID: \"dd9b47ed-4021-4981-8975-d8af2c7d80ce\") " Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.797159 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd9b47ed-4021-4981-8975-d8af2c7d80ce-operator-scripts\") pod \"dd9b47ed-4021-4981-8975-d8af2c7d80ce\" (UID: \"dd9b47ed-4021-4981-8975-d8af2c7d80ce\") " Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.797593 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9qrs\" (UniqueName: \"kubernetes.io/projected/61ffaea7-3bce-404a-9717-0e0e9b49c9d4-kube-api-access-h9qrs\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.797604 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61ffaea7-3bce-404a-9717-0e0e9b49c9d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.798639 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9b47ed-4021-4981-8975-d8af2c7d80ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd9b47ed-4021-4981-8975-d8af2c7d80ce" (UID: "dd9b47ed-4021-4981-8975-d8af2c7d80ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.830548 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9b47ed-4021-4981-8975-d8af2c7d80ce-kube-api-access-ldqjj" (OuterVolumeSpecName: "kube-api-access-ldqjj") pod "dd9b47ed-4021-4981-8975-d8af2c7d80ce" (UID: "dd9b47ed-4021-4981-8975-d8af2c7d80ce"). InnerVolumeSpecName "kube-api-access-ldqjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.832830 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xh8l6" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.902043 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa66ef85-42ab-42a2-9ed2-3cd9210d962e-operator-scripts\") pod \"aa66ef85-42ab-42a2-9ed2-3cd9210d962e\" (UID: \"aa66ef85-42ab-42a2-9ed2-3cd9210d962e\") " Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.902467 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ab2df7-a5b7-463d-96c6-b2d208031c97-operator-scripts\") pod \"34ab2df7-a5b7-463d-96c6-b2d208031c97\" (UID: \"34ab2df7-a5b7-463d-96c6-b2d208031c97\") " Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.902530 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09c43be6-a13b-4447-b9f8-e6aeacd4b2be-operator-scripts\") pod \"09c43be6-a13b-4447-b9f8-e6aeacd4b2be\" (UID: \"09c43be6-a13b-4447-b9f8-e6aeacd4b2be\") " Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.902644 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa66ef85-42ab-42a2-9ed2-3cd9210d962e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa66ef85-42ab-42a2-9ed2-3cd9210d962e" (UID: "aa66ef85-42ab-42a2-9ed2-3cd9210d962e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.902657 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4mbw\" (UniqueName: \"kubernetes.io/projected/aa66ef85-42ab-42a2-9ed2-3cd9210d962e-kube-api-access-b4mbw\") pod \"aa66ef85-42ab-42a2-9ed2-3cd9210d962e\" (UID: \"aa66ef85-42ab-42a2-9ed2-3cd9210d962e\") " Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.902915 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kprd8\" (UniqueName: \"kubernetes.io/projected/09c43be6-a13b-4447-b9f8-e6aeacd4b2be-kube-api-access-kprd8\") pod \"09c43be6-a13b-4447-b9f8-e6aeacd4b2be\" (UID: \"09c43be6-a13b-4447-b9f8-e6aeacd4b2be\") " Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.902942 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bqrk\" (UniqueName: \"kubernetes.io/projected/34ab2df7-a5b7-463d-96c6-b2d208031c97-kube-api-access-2bqrk\") pod \"34ab2df7-a5b7-463d-96c6-b2d208031c97\" (UID: \"34ab2df7-a5b7-463d-96c6-b2d208031c97\") " Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.903133 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ab2df7-a5b7-463d-96c6-b2d208031c97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "34ab2df7-a5b7-463d-96c6-b2d208031c97" (UID: "34ab2df7-a5b7-463d-96c6-b2d208031c97"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.903575 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa66ef85-42ab-42a2-9ed2-3cd9210d962e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.903604 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldqjj\" (UniqueName: \"kubernetes.io/projected/dd9b47ed-4021-4981-8975-d8af2c7d80ce-kube-api-access-ldqjj\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.903619 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/34ab2df7-a5b7-463d-96c6-b2d208031c97-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.903630 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd9b47ed-4021-4981-8975-d8af2c7d80ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.904097 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09c43be6-a13b-4447-b9f8-e6aeacd4b2be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09c43be6-a13b-4447-b9f8-e6aeacd4b2be" (UID: "09c43be6-a13b-4447-b9f8-e6aeacd4b2be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.906915 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa66ef85-42ab-42a2-9ed2-3cd9210d962e-kube-api-access-b4mbw" (OuterVolumeSpecName: "kube-api-access-b4mbw") pod "aa66ef85-42ab-42a2-9ed2-3cd9210d962e" (UID: "aa66ef85-42ab-42a2-9ed2-3cd9210d962e"). InnerVolumeSpecName "kube-api-access-b4mbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.907902 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ab2df7-a5b7-463d-96c6-b2d208031c97-kube-api-access-2bqrk" (OuterVolumeSpecName: "kube-api-access-2bqrk") pod "34ab2df7-a5b7-463d-96c6-b2d208031c97" (UID: "34ab2df7-a5b7-463d-96c6-b2d208031c97"). InnerVolumeSpecName "kube-api-access-2bqrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:49 crc kubenswrapper[5049]: I0127 17:18:49.908637 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09c43be6-a13b-4447-b9f8-e6aeacd4b2be-kube-api-access-kprd8" (OuterVolumeSpecName: "kube-api-access-kprd8") pod "09c43be6-a13b-4447-b9f8-e6aeacd4b2be" (UID: "09c43be6-a13b-4447-b9f8-e6aeacd4b2be"). InnerVolumeSpecName "kube-api-access-kprd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.005750 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09c43be6-a13b-4447-b9f8-e6aeacd4b2be-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.005777 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4mbw\" (UniqueName: \"kubernetes.io/projected/aa66ef85-42ab-42a2-9ed2-3cd9210d962e-kube-api-access-b4mbw\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.005790 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kprd8\" (UniqueName: \"kubernetes.io/projected/09c43be6-a13b-4447-b9f8-e6aeacd4b2be-kube-api-access-kprd8\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.005820 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bqrk\" (UniqueName: \"kubernetes.io/projected/34ab2df7-a5b7-463d-96c6-b2d208031c97-kube-api-access-2bqrk\") on node \"crc\" DevicePath \"\"" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.062237 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xh8l6" event={"ID":"09c43be6-a13b-4447-b9f8-e6aeacd4b2be","Type":"ContainerDied","Data":"acfe4924544521cef4b2856db6d06459aba68e6640f261338292f385b538555e"} Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.062279 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acfe4924544521cef4b2856db6d06459aba68e6640f261338292f385b538555e" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.062367 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xh8l6" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.072242 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-80af-account-create-update-9lhmg" event={"ID":"34ab2df7-a5b7-463d-96c6-b2d208031c97","Type":"ContainerDied","Data":"60e9bf4cb72493a41920e914425ff9573719aaa77a3789fd0dd7269859bc2b48"} Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.072261 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-80af-account-create-update-9lhmg" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.072275 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60e9bf4cb72493a41920e914425ff9573719aaa77a3789fd0dd7269859bc2b48" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.073813 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d95d-account-create-update-ggmht" event={"ID":"dd9b47ed-4021-4981-8975-d8af2c7d80ce","Type":"ContainerDied","Data":"e5eb2b2fb188212e98c83d5f12d536d1352371cd983ec7f303ef474de54ba07e"} Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.073919 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5eb2b2fb188212e98c83d5f12d536d1352371cd983ec7f303ef474de54ba07e" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.073841 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d95d-account-create-update-ggmht" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.074936 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1155387-125d-46be-a899-0ec8afc1411f","Type":"ContainerStarted","Data":"c422dcf58a12b3e1c2bbfb5a87f0b3c14d5feabb3c8f40c663b069b0a0e59651"} Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.075896 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sqdsx" event={"ID":"61ffaea7-3bce-404a-9717-0e0e9b49c9d4","Type":"ContainerDied","Data":"9abad3b8a570a22a5f05fe72c053b537ddcf2a96b66cc0e12d3c1659b74d6059"} Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.075920 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9abad3b8a570a22a5f05fe72c053b537ddcf2a96b66cc0e12d3c1659b74d6059" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.075958 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sqdsx" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.083355 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-544e-account-create-update-hnhlj" event={"ID":"aa66ef85-42ab-42a2-9ed2-3cd9210d962e","Type":"ContainerDied","Data":"7b5401831fa4bd7eac4796abee3b35fe574ba90871d12c4b60c11bf0c4a32107"} Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.083391 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b5401831fa4bd7eac4796abee3b35fe574ba90871d12c4b60c11bf0c4a32107" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.083455 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-544e-account-create-update-hnhlj" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.087406 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-tnr58" event={"ID":"255716e0-246f-4167-a784-df7005bade5d","Type":"ContainerDied","Data":"78bd9a2ab8321206246df6eadac7aaacd5086d12a959cffbf9b1e22e1bd87b64"} Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.087428 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78bd9a2ab8321206246df6eadac7aaacd5086d12a959cffbf9b1e22e1bd87b64" Jan 27 17:18:50 crc kubenswrapper[5049]: I0127 17:18:50.087479 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-tnr58" Jan 27 17:18:51 crc kubenswrapper[5049]: I0127 17:18:51.002941 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:18:51 crc kubenswrapper[5049]: I0127 17:18:51.100023 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1155387-125d-46be-a899-0ec8afc1411f","Type":"ContainerStarted","Data":"7215db154fca4c485ce5f2aa053df582e36a7f1fecb9e09ebfd15e0e765ca5ec"} Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.127469 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1155387-125d-46be-a899-0ec8afc1411f","Type":"ContainerStarted","Data":"7ae3504deccf9251f861e1b04cad137e5cae30793e3aca67a011fc088b924d3c"} Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.128039 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="ceilometer-central-agent" containerID="cri-o://c37cefb751d64431838d0d639d77094b356c1cbc28d82c80423ed1139e6e6a83" gracePeriod=30 Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.128398 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.128793 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="proxy-httpd" containerID="cri-o://7ae3504deccf9251f861e1b04cad137e5cae30793e3aca67a011fc088b924d3c" gracePeriod=30 Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.128889 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="sg-core" containerID="cri-o://7215db154fca4c485ce5f2aa053df582e36a7f1fecb9e09ebfd15e0e765ca5ec" gracePeriod=30 Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.128933 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="ceilometer-notification-agent" containerID="cri-o://c422dcf58a12b3e1c2bbfb5a87f0b3c14d5feabb3c8f40c663b069b0a0e59651" gracePeriod=30 Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.161634 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.214309177 podStartE2EDuration="9.161593904s" podCreationTimestamp="2026-01-27 17:18:44 +0000 UTC" firstStartedPulling="2026-01-27 17:18:48.06429374 +0000 UTC m=+1303.163267299" lastFinishedPulling="2026-01-27 17:18:52.011578477 +0000 UTC m=+1307.110552026" observedRunningTime="2026-01-27 17:18:53.148276603 +0000 UTC m=+1308.247250192" watchObservedRunningTime="2026-01-27 17:18:53.161593904 +0000 UTC m=+1308.260567463" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.622862 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jkb97"] Jan 27 17:18:53 crc kubenswrapper[5049]: E0127 17:18:53.623213 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ab2df7-a5b7-463d-96c6-b2d208031c97" containerName="mariadb-account-create-update" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.623232 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ab2df7-a5b7-463d-96c6-b2d208031c97" 
containerName="mariadb-account-create-update" Jan 27 17:18:53 crc kubenswrapper[5049]: E0127 17:18:53.623253 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ffaea7-3bce-404a-9717-0e0e9b49c9d4" containerName="mariadb-database-create" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.623264 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ffaea7-3bce-404a-9717-0e0e9b49c9d4" containerName="mariadb-database-create" Jan 27 17:18:53 crc kubenswrapper[5049]: E0127 17:18:53.623288 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c43be6-a13b-4447-b9f8-e6aeacd4b2be" containerName="mariadb-database-create" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.623294 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c43be6-a13b-4447-b9f8-e6aeacd4b2be" containerName="mariadb-database-create" Jan 27 17:18:53 crc kubenswrapper[5049]: E0127 17:18:53.623308 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255716e0-246f-4167-a784-df7005bade5d" containerName="mariadb-database-create" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.623317 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="255716e0-246f-4167-a784-df7005bade5d" containerName="mariadb-database-create" Jan 27 17:18:53 crc kubenswrapper[5049]: E0127 17:18:53.623332 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9b47ed-4021-4981-8975-d8af2c7d80ce" containerName="mariadb-account-create-update" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.623340 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9b47ed-4021-4981-8975-d8af2c7d80ce" containerName="mariadb-account-create-update" Jan 27 17:18:53 crc kubenswrapper[5049]: E0127 17:18:53.623351 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa66ef85-42ab-42a2-9ed2-3cd9210d962e" containerName="mariadb-account-create-update" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.623357 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa66ef85-42ab-42a2-9ed2-3cd9210d962e" containerName="mariadb-account-create-update" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.623505 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ffaea7-3bce-404a-9717-0e0e9b49c9d4" containerName="mariadb-database-create" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.623522 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa66ef85-42ab-42a2-9ed2-3cd9210d962e" containerName="mariadb-account-create-update" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.623531 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c43be6-a13b-4447-b9f8-e6aeacd4b2be" containerName="mariadb-database-create" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.623541 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ab2df7-a5b7-463d-96c6-b2d208031c97" containerName="mariadb-account-create-update" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.623556 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd9b47ed-4021-4981-8975-d8af2c7d80ce" containerName="mariadb-account-create-update" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.623563 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="255716e0-246f-4167-a784-df7005bade5d" containerName="mariadb-database-create" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.624212 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.626556 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.626793 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.627352 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-v6bh6" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.646927 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jkb97"] Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.667757 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6qjv\" (UniqueName: \"kubernetes.io/projected/54f65153-193c-49dd-91d3-b7eecb30c74b-kube-api-access-t6qjv\") pod \"nova-cell0-conductor-db-sync-jkb97\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.667823 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jkb97\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.667947 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-scripts\") pod \"nova-cell0-conductor-db-sync-jkb97\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.667971 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-config-data\") pod \"nova-cell0-conductor-db-sync-jkb97\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.769856 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-scripts\") pod \"nova-cell0-conductor-db-sync-jkb97\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.769898 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-config-data\") pod \"nova-cell0-conductor-db-sync-jkb97\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.769975 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6qjv\" (UniqueName: \"kubernetes.io/projected/54f65153-193c-49dd-91d3-b7eecb30c74b-kube-api-access-t6qjv\") pod \"nova-cell0-conductor-db-sync-jkb97\" (UID: 
\"54f65153-193c-49dd-91d3-b7eecb30c74b\") " pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.770000 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jkb97\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.774853 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-jkb97\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.775011 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-config-data\") pod \"nova-cell0-conductor-db-sync-jkb97\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.775315 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-scripts\") pod \"nova-cell0-conductor-db-sync-jkb97\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:18:53 crc kubenswrapper[5049]: I0127 17:18:53.787423 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6qjv\" (UniqueName: \"kubernetes.io/projected/54f65153-193c-49dd-91d3-b7eecb30c74b-kube-api-access-t6qjv\") pod \"nova-cell0-conductor-db-sync-jkb97\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:18:54 crc kubenswrapper[5049]: I0127 17:18:54.027609 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:18:54 crc kubenswrapper[5049]: I0127 17:18:54.144755 5049 generic.go:334] "Generic (PLEG): container finished" podID="a1155387-125d-46be-a899-0ec8afc1411f" containerID="7215db154fca4c485ce5f2aa053df582e36a7f1fecb9e09ebfd15e0e765ca5ec" exitCode=2 Jan 27 17:18:54 crc kubenswrapper[5049]: I0127 17:18:54.145109 5049 generic.go:334] "Generic (PLEG): container finished" podID="a1155387-125d-46be-a899-0ec8afc1411f" containerID="c422dcf58a12b3e1c2bbfb5a87f0b3c14d5feabb3c8f40c663b069b0a0e59651" exitCode=0 Jan 27 17:18:54 crc kubenswrapper[5049]: I0127 17:18:54.145121 5049 generic.go:334] "Generic (PLEG): container finished" podID="a1155387-125d-46be-a899-0ec8afc1411f" containerID="c37cefb751d64431838d0d639d77094b356c1cbc28d82c80423ed1139e6e6a83" exitCode=0 Jan 27 17:18:54 crc kubenswrapper[5049]: I0127 17:18:54.145143 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1155387-125d-46be-a899-0ec8afc1411f","Type":"ContainerDied","Data":"7215db154fca4c485ce5f2aa053df582e36a7f1fecb9e09ebfd15e0e765ca5ec"} Jan 27 17:18:54 crc kubenswrapper[5049]: I0127 17:18:54.145194 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1155387-125d-46be-a899-0ec8afc1411f","Type":"ContainerDied","Data":"c422dcf58a12b3e1c2bbfb5a87f0b3c14d5feabb3c8f40c663b069b0a0e59651"} Jan 27 17:18:54 crc kubenswrapper[5049]: I0127 17:18:54.145210 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1155387-125d-46be-a899-0ec8afc1411f","Type":"ContainerDied","Data":"c37cefb751d64431838d0d639d77094b356c1cbc28d82c80423ed1139e6e6a83"} Jan 27 17:18:54 crc kubenswrapper[5049]: I0127 17:18:54.545321 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jkb97"] Jan 27 17:18:55 crc kubenswrapper[5049]: I0127 17:18:55.168018 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jkb97" event={"ID":"54f65153-193c-49dd-91d3-b7eecb30c74b","Type":"ContainerStarted","Data":"67b5f35e32fb0b4becab0edd176849e8fb2dcd38c2d2dacc8b8637e95e411cd0"} Jan 27 17:19:02 crc kubenswrapper[5049]: I0127 17:19:02.226330 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jkb97" event={"ID":"54f65153-193c-49dd-91d3-b7eecb30c74b","Type":"ContainerStarted","Data":"4e44fbc63bdfe3405cc0c996f34a24eeb2b3df0dad0e5067d6936856e7cf90b2"} Jan 27 17:19:02 crc kubenswrapper[5049]: I0127 17:19:02.244280 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-jkb97" podStartSLOduration=2.32711257 podStartE2EDuration="9.244251754s" podCreationTimestamp="2026-01-27 17:18:53 +0000 UTC" firstStartedPulling="2026-01-27 17:18:54.547253858 +0000 UTC m=+1309.646227407" lastFinishedPulling="2026-01-27 17:19:01.464393042 +0000 UTC m=+1316.563366591" observedRunningTime="2026-01-27 17:19:02.240601239 +0000 UTC m=+1317.339574788" watchObservedRunningTime="2026-01-27 17:19:02.244251754 +0000 UTC m=+1317.343225303" Jan 27 17:19:12 crc kubenswrapper[5049]: I0127 17:19:12.593214 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:19:12 crc kubenswrapper[5049]: I0127 17:19:12.594084 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="5bc08fc1-cb54-4c3f-888d-89c9ea303a80" containerName="glance-log" containerID="cri-o://f458b871a2c65ece7609fd14e073bf2f348d244190d8335cdf4a8b6fa65b2442" gracePeriod=30 Jan 27 17:19:12 crc kubenswrapper[5049]: I0127 17:19:12.594190 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5bc08fc1-cb54-4c3f-888d-89c9ea303a80" containerName="glance-httpd" containerID="cri-o://23107b5e5f1b2d4827d68b36eeea40a74e59b5f3403e846ebcd74816a3ab0840" gracePeriod=30 Jan 27 17:19:13 crc kubenswrapper[5049]: I0127 17:19:13.343414 5049 generic.go:334] "Generic (PLEG): container finished" podID="5bc08fc1-cb54-4c3f-888d-89c9ea303a80" containerID="f458b871a2c65ece7609fd14e073bf2f348d244190d8335cdf4a8b6fa65b2442" exitCode=143 Jan 27 17:19:13 crc kubenswrapper[5049]: I0127 17:19:13.343480 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5bc08fc1-cb54-4c3f-888d-89c9ea303a80","Type":"ContainerDied","Data":"f458b871a2c65ece7609fd14e073bf2f348d244190d8335cdf4a8b6fa65b2442"} Jan 27 17:19:14 crc kubenswrapper[5049]: I0127 17:19:14.204741 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 17:19:14 crc kubenswrapper[5049]: I0127 17:19:14.205312 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" containerName="glance-log" containerID="cri-o://5b8ee3a3cb2a7b7a82cbef160c7edc02a36cb9de9388e94128e34611a436252d" gracePeriod=30 Jan 27 17:19:14 crc kubenswrapper[5049]: I0127 17:19:14.205386 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" containerName="glance-httpd" containerID="cri-o://69664f7fefda5e8dfe01261ba6d1025f27132ff8dcf82bf85b12d1ee671ff5f9" gracePeriod=30 Jan 27 17:19:14 crc kubenswrapper[5049]: I0127 17:19:14.353655 5049 generic.go:334] "Generic (PLEG): container finished" podID="a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" containerID="5b8ee3a3cb2a7b7a82cbef160c7edc02a36cb9de9388e94128e34611a436252d" exitCode=143 Jan 27 17:19:14 crc kubenswrapper[5049]: I0127 17:19:14.353707 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce","Type":"ContainerDied","Data":"5b8ee3a3cb2a7b7a82cbef160c7edc02a36cb9de9388e94128e34611a436252d"} Jan 27 17:19:15 crc kubenswrapper[5049]: I0127 17:19:15.364015 5049 generic.go:334] "Generic (PLEG): container finished" podID="54f65153-193c-49dd-91d3-b7eecb30c74b" containerID="4e44fbc63bdfe3405cc0c996f34a24eeb2b3df0dad0e5067d6936856e7cf90b2" exitCode=0 Jan 27 17:19:15 crc kubenswrapper[5049]: I0127 17:19:15.364092 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-jkb97" event={"ID":"54f65153-193c-49dd-91d3-b7eecb30c74b","Type":"ContainerDied","Data":"4e44fbc63bdfe3405cc0c996f34a24eeb2b3df0dad0e5067d6936856e7cf90b2"} Jan 27 17:19:15 crc kubenswrapper[5049]: I0127 17:19:15.538741 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 17:19:16 crc kubenswrapper[5049]: I0127 17:19:16.393965 5049 generic.go:334] "Generic (PLEG): container finished" 
podID="5bc08fc1-cb54-4c3f-888d-89c9ea303a80" containerID="23107b5e5f1b2d4827d68b36eeea40a74e59b5f3403e846ebcd74816a3ab0840" exitCode=0 Jan 27 17:19:16 crc kubenswrapper[5049]: I0127 17:19:16.394055 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5bc08fc1-cb54-4c3f-888d-89c9ea303a80","Type":"ContainerDied","Data":"23107b5e5f1b2d4827d68b36eeea40a74e59b5f3403e846ebcd74816a3ab0840"} Jan 27 17:19:16 crc kubenswrapper[5049]: I0127 17:19:16.791071 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:19:16 crc kubenswrapper[5049]: I0127 17:19:16.900888 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-config-data\") pod \"54f65153-193c-49dd-91d3-b7eecb30c74b\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " Jan 27 17:19:16 crc kubenswrapper[5049]: I0127 17:19:16.901053 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-scripts\") pod \"54f65153-193c-49dd-91d3-b7eecb30c74b\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " Jan 27 17:19:16 crc kubenswrapper[5049]: I0127 17:19:16.901244 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-combined-ca-bundle\") pod \"54f65153-193c-49dd-91d3-b7eecb30c74b\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " Jan 27 17:19:16 crc kubenswrapper[5049]: I0127 17:19:16.901440 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6qjv\" (UniqueName: \"kubernetes.io/projected/54f65153-193c-49dd-91d3-b7eecb30c74b-kube-api-access-t6qjv\") pod \"54f65153-193c-49dd-91d3-b7eecb30c74b\" (UID: \"54f65153-193c-49dd-91d3-b7eecb30c74b\") " Jan 27 17:19:16 crc kubenswrapper[5049]: I0127 17:19:16.906646 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f65153-193c-49dd-91d3-b7eecb30c74b-kube-api-access-t6qjv" (OuterVolumeSpecName: "kube-api-access-t6qjv") pod "54f65153-193c-49dd-91d3-b7eecb30c74b" (UID: "54f65153-193c-49dd-91d3-b7eecb30c74b"). InnerVolumeSpecName "kube-api-access-t6qjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:19:16 crc kubenswrapper[5049]: I0127 17:19:16.906787 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-scripts" (OuterVolumeSpecName: "scripts") pod "54f65153-193c-49dd-91d3-b7eecb30c74b" (UID: "54f65153-193c-49dd-91d3-b7eecb30c74b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:16 crc kubenswrapper[5049]: I0127 17:19:16.928187 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-config-data" (OuterVolumeSpecName: "config-data") pod "54f65153-193c-49dd-91d3-b7eecb30c74b" (UID: "54f65153-193c-49dd-91d3-b7eecb30c74b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:16 crc kubenswrapper[5049]: I0127 17:19:16.938796 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54f65153-193c-49dd-91d3-b7eecb30c74b" (UID: "54f65153-193c-49dd-91d3-b7eecb30c74b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.001294 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.005092 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.005122 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.005133 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f65153-193c-49dd-91d3-b7eecb30c74b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.005143 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6qjv\" (UniqueName: \"kubernetes.io/projected/54f65153-193c-49dd-91d3-b7eecb30c74b-kube-api-access-t6qjv\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.106499 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-logs\") pod \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.106586 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-config-data\") pod \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.106616 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-httpd-run\") pod \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.106775 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-scripts\") pod \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.106851 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kw45\" (UniqueName: \"kubernetes.io/projected/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-kube-api-access-5kw45\") pod \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.106949 5049 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.106990 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-public-tls-certs\") pod \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.107086 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-combined-ca-bundle\") pod \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\" (UID: \"5bc08fc1-cb54-4c3f-888d-89c9ea303a80\") " Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.107326 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-logs" (OuterVolumeSpecName: "logs") pod "5bc08fc1-cb54-4c3f-888d-89c9ea303a80" (UID: "5bc08fc1-cb54-4c3f-888d-89c9ea303a80"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.107379 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5bc08fc1-cb54-4c3f-888d-89c9ea303a80" (UID: "5bc08fc1-cb54-4c3f-888d-89c9ea303a80"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.107949 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.107972 5049 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.112222 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-kube-api-access-5kw45" (OuterVolumeSpecName: "kube-api-access-5kw45") pod "5bc08fc1-cb54-4c3f-888d-89c9ea303a80" (UID: "5bc08fc1-cb54-4c3f-888d-89c9ea303a80"). InnerVolumeSpecName "kube-api-access-5kw45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.121357 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "5bc08fc1-cb54-4c3f-888d-89c9ea303a80" (UID: "5bc08fc1-cb54-4c3f-888d-89c9ea303a80"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.124928 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-scripts" (OuterVolumeSpecName: "scripts") pod "5bc08fc1-cb54-4c3f-888d-89c9ea303a80" (UID: "5bc08fc1-cb54-4c3f-888d-89c9ea303a80"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.159946 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bc08fc1-cb54-4c3f-888d-89c9ea303a80" (UID: "5bc08fc1-cb54-4c3f-888d-89c9ea303a80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.172863 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-config-data" (OuterVolumeSpecName: "config-data") pod "5bc08fc1-cb54-4c3f-888d-89c9ea303a80" (UID: "5bc08fc1-cb54-4c3f-888d-89c9ea303a80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.183294 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5bc08fc1-cb54-4c3f-888d-89c9ea303a80" (UID: "5bc08fc1-cb54-4c3f-888d-89c9ea303a80"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.210280 5049 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.210325 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.210341 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.210353 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.210367 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kw45\" (UniqueName: \"kubernetes.io/projected/5bc08fc1-cb54-4c3f-888d-89c9ea303a80-kube-api-access-5kw45\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.210419 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.241815 5049 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.311853 5049 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.405421 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-jkb97" event={"ID":"54f65153-193c-49dd-91d3-b7eecb30c74b","Type":"ContainerDied","Data":"67b5f35e32fb0b4becab0edd176849e8fb2dcd38c2d2dacc8b8637e95e411cd0"} Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.405472 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67b5f35e32fb0b4becab0edd176849e8fb2dcd38c2d2dacc8b8637e95e411cd0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.405478 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-jkb97" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.408034 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5bc08fc1-cb54-4c3f-888d-89c9ea303a80","Type":"ContainerDied","Data":"7222092dfb0a029a189b9e8c425ce40815848726660d21d395ac5da331ee3327"} Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.408091 5049 scope.go:117] "RemoveContainer" containerID="23107b5e5f1b2d4827d68b36eeea40a74e59b5f3403e846ebcd74816a3ab0840" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.408092 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.475914 5049 scope.go:117] "RemoveContainer" containerID="f458b871a2c65ece7609fd14e073bf2f348d244190d8335cdf4a8b6fa65b2442" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.515626 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.546280 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.578076 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:19:17 crc kubenswrapper[5049]: E0127 17:19:17.578744 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc08fc1-cb54-4c3f-888d-89c9ea303a80" containerName="glance-log" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.578759 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc08fc1-cb54-4c3f-888d-89c9ea303a80" containerName="glance-log" Jan 27 17:19:17 crc kubenswrapper[5049]: E0127 17:19:17.578794 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bc08fc1-cb54-4c3f-888d-89c9ea303a80" containerName="glance-httpd" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.578801 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bc08fc1-cb54-4c3f-888d-89c9ea303a80" containerName="glance-httpd" Jan 27 17:19:17 crc kubenswrapper[5049]: E0127 17:19:17.578921 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f65153-193c-49dd-91d3-b7eecb30c74b" containerName="nova-cell0-conductor-db-sync" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.578936 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f65153-193c-49dd-91d3-b7eecb30c74b" containerName="nova-cell0-conductor-db-sync" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.579281 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f65153-193c-49dd-91d3-b7eecb30c74b" containerName="nova-cell0-conductor-db-sync" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.579303 5049 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5bc08fc1-cb54-4c3f-888d-89c9ea303a80" containerName="glance-log" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.579318 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bc08fc1-cb54-4c3f-888d-89c9ea303a80" containerName="glance-httpd" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.580857 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.581255 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.585288 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.585793 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.592317 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-v6bh6" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.593648 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.594282 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.618093 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.642411 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.666429 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bc08fc1-cb54-4c3f-888d-89c9ea303a80" path="/var/lib/kubelet/pods/5bc08fc1-cb54-4c3f-888d-89c9ea303a80/volumes" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.721008 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.721071 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4b4464-1c98-412b-96cf-235908a4eaf6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6c4b4464-1c98-412b-96cf-235908a4eaf6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.721136 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-config-data\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.721203 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59384c20-c0a3-4524-9ddb-407b96e8f882-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.721225 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4b4464-1c98-412b-96cf-235908a4eaf6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6c4b4464-1c98-412b-96cf-235908a4eaf6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.721268 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-scripts\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.721309 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vvhx\" (UniqueName: \"kubernetes.io/projected/59384c20-c0a3-4524-9ddb-407b96e8f882-kube-api-access-7vvhx\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.721358 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7ndl\" (UniqueName: \"kubernetes.io/projected/6c4b4464-1c98-412b-96cf-235908a4eaf6-kube-api-access-x7ndl\") pod \"nova-cell0-conductor-0\" (UID: \"6c4b4464-1c98-412b-96cf-235908a4eaf6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.721509 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.721553 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59384c20-c0a3-4524-9ddb-407b96e8f882-logs\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.721607 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.823393 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-config-data\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.823468 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/59384c20-c0a3-4524-9ddb-407b96e8f882-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.823492 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4b4464-1c98-412b-96cf-235908a4eaf6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6c4b4464-1c98-412b-96cf-235908a4eaf6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.823521 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-scripts\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.823551 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vvhx\" (UniqueName: \"kubernetes.io/projected/59384c20-c0a3-4524-9ddb-407b96e8f882-kube-api-access-7vvhx\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.823575 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7ndl\" (UniqueName: \"kubernetes.io/projected/6c4b4464-1c98-412b-96cf-235908a4eaf6-kube-api-access-x7ndl\") pod \"nova-cell0-conductor-0\" (UID: \"6c4b4464-1c98-412b-96cf-235908a4eaf6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.823608 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.823631 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59384c20-c0a3-4524-9ddb-407b96e8f882-logs\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.823653 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.823701 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.823730 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4b4464-1c98-412b-96cf-235908a4eaf6-combined-ca-bundle\") pod 
\"nova-cell0-conductor-0\" (UID: \"6c4b4464-1c98-412b-96cf-235908a4eaf6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.825073 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59384c20-c0a3-4524-9ddb-407b96e8f882-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.826556 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59384c20-c0a3-4524-9ddb-407b96e8f882-logs\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.826739 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.829416 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4b4464-1c98-412b-96cf-235908a4eaf6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6c4b4464-1c98-412b-96cf-235908a4eaf6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.833393 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.833707 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-scripts\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.845741 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-config-data\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.846766 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4b4464-1c98-412b-96cf-235908a4eaf6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6c4b4464-1c98-412b-96cf-235908a4eaf6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.847652 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 
17:19:17.848742 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vvhx\" (UniqueName: \"kubernetes.io/projected/59384c20-c0a3-4524-9ddb-407b96e8f882-kube-api-access-7vvhx\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.850216 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7ndl\" (UniqueName: \"kubernetes.io/projected/6c4b4464-1c98-412b-96cf-235908a4eaf6-kube-api-access-x7ndl\") pod \"nova-cell0-conductor-0\" (UID: \"6c4b4464-1c98-412b-96cf-235908a4eaf6\") " pod="openstack/nova-cell0-conductor-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.866989 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.935188 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 17:19:17 crc kubenswrapper[5049]: I0127 17:19:17.949621 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.057339 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.148853 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67xgs\" (UniqueName: \"kubernetes.io/projected/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-kube-api-access-67xgs\") pod \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.148940 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.149328 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-logs\") pod \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.149393 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-config-data\") pod \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.149573 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-httpd-run\") pod \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.149603 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-combined-ca-bundle\") pod \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.149653 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-internal-tls-certs\") pod \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.149707 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-scripts\") pod \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\" (UID: \"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce\") " Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.152057 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-logs" (OuterVolumeSpecName: "logs") pod "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" (UID: "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.155156 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" (UID: "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.156119 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" (UID: "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.157253 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-scripts" (OuterVolumeSpecName: "scripts") pod "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" (UID: "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.161565 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-kube-api-access-67xgs" (OuterVolumeSpecName: "kube-api-access-67xgs") pod "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" (UID: "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce"). InnerVolumeSpecName "kube-api-access-67xgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.188358 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" (UID: "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.225046 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" (UID: "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.236577 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-config-data" (OuterVolumeSpecName: "config-data") pod "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" (UID: "a06b8e7e-7f19-47be-999f-dd2db1f6a2ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.251898 5049 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.251925 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.251936 5049 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.251946 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.251955 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67xgs\" (UniqueName: \"kubernetes.io/projected/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-kube-api-access-67xgs\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.251988 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.251999 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.252007 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.269119 5049 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.353433 5049 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 
Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.419865 5049 generic.go:334] "Generic (PLEG): container finished" podID="a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" containerID="69664f7fefda5e8dfe01261ba6d1025f27132ff8dcf82bf85b12d1ee671ff5f9" exitCode=0 Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.419908 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce","Type":"ContainerDied","Data":"69664f7fefda5e8dfe01261ba6d1025f27132ff8dcf82bf85b12d1ee671ff5f9"} Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.419919 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.419929 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a06b8e7e-7f19-47be-999f-dd2db1f6a2ce","Type":"ContainerDied","Data":"acd6a38fc81c7978ae3553378ff5774849147dfe71bf8d18d72efffd01dea382"} Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.419946 5049 scope.go:117] "RemoveContainer" containerID="69664f7fefda5e8dfe01261ba6d1025f27132ff8dcf82bf85b12d1ee671ff5f9" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.462685 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.465176 5049 scope.go:117] "RemoveContainer" containerID="5b8ee3a3cb2a7b7a82cbef160c7edc02a36cb9de9388e94128e34611a436252d" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.474935 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.483060 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 17:19:18 crc kubenswrapper[5049]: E0127 17:19:18.483600 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" containerName="glance-log" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.483663 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" containerName="glance-log" Jan 27 17:19:18 crc kubenswrapper[5049]: E0127 17:19:18.483743 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" containerName="glance-httpd" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.483794 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" containerName="glance-httpd" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.484026 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" containerName="glance-httpd" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.484098 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" containerName="glance-log" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.485032 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.488692 5049 scope.go:117] "RemoveContainer" containerID="69664f7fefda5e8dfe01261ba6d1025f27132ff8dcf82bf85b12d1ee671ff5f9" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.489059 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.489153 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 17:19:18 crc kubenswrapper[5049]: E0127 17:19:18.489198 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69664f7fefda5e8dfe01261ba6d1025f27132ff8dcf82bf85b12d1ee671ff5f9\": container with ID starting with 69664f7fefda5e8dfe01261ba6d1025f27132ff8dcf82bf85b12d1ee671ff5f9 not found: ID does not exist" containerID="69664f7fefda5e8dfe01261ba6d1025f27132ff8dcf82bf85b12d1ee671ff5f9" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.489240 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69664f7fefda5e8dfe01261ba6d1025f27132ff8dcf82bf85b12d1ee671ff5f9"} err="failed to get container status \"69664f7fefda5e8dfe01261ba6d1025f27132ff8dcf82bf85b12d1ee671ff5f9\": rpc error: code = NotFound desc = could not find container \"69664f7fefda5e8dfe01261ba6d1025f27132ff8dcf82bf85b12d1ee671ff5f9\": container with ID starting with 69664f7fefda5e8dfe01261ba6d1025f27132ff8dcf82bf85b12d1ee671ff5f9 not found: ID does not exist" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.489263 5049 scope.go:117] "RemoveContainer" containerID="5b8ee3a3cb2a7b7a82cbef160c7edc02a36cb9de9388e94128e34611a436252d" Jan 27 17:19:18 crc kubenswrapper[5049]: E0127 17:19:18.491161 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b8ee3a3cb2a7b7a82cbef160c7edc02a36cb9de9388e94128e34611a436252d\": container with ID starting with 5b8ee3a3cb2a7b7a82cbef160c7edc02a36cb9de9388e94128e34611a436252d not found: ID does not exist" containerID="5b8ee3a3cb2a7b7a82cbef160c7edc02a36cb9de9388e94128e34611a436252d" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.491192 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b8ee3a3cb2a7b7a82cbef160c7edc02a36cb9de9388e94128e34611a436252d"} err="failed to get container status \"5b8ee3a3cb2a7b7a82cbef160c7edc02a36cb9de9388e94128e34611a436252d\": rpc error: code = NotFound desc = could not find container \"5b8ee3a3cb2a7b7a82cbef160c7edc02a36cb9de9388e94128e34611a436252d\": container with ID starting with 5b8ee3a3cb2a7b7a82cbef160c7edc02a36cb9de9388e94128e34611a436252d not found: ID does not exist" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.503511 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.556295 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.558181 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjs49\" (UniqueName: \"kubernetes.io/projected/d89c9402-b4c3-4180-8a61-9e63497ebb66-kube-api-access-wjs49\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.558357 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.558459 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d89c9402-b4c3-4180-8a61-9e63497ebb66-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.558732 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.558846 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.558918 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.558980 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d89c9402-b4c3-4180-8a61-9e63497ebb66-logs\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.563957 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.585249 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.662504 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.662537 5049 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.662557 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d89c9402-b4c3-4180-8a61-9e63497ebb66-logs\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.662600 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.662625 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjs49\" (UniqueName: \"kubernetes.io/projected/d89c9402-b4c3-4180-8a61-9e63497ebb66-kube-api-access-wjs49\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.662689 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.662715 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d89c9402-b4c3-4180-8a61-9e63497ebb66-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.662741 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.663488 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.664197 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d89c9402-b4c3-4180-8a61-9e63497ebb66-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.665217 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d89c9402-b4c3-4180-8a61-9e63497ebb66-logs\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.672386 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.673025 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.678458 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.682559 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.703551 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjs49\" (UniqueName: \"kubernetes.io/projected/d89c9402-b4c3-4180-8a61-9e63497ebb66-kube-api-access-wjs49\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.716626 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " pod="openstack/glance-default-internal-api-0" Jan 27 17:19:18 crc kubenswrapper[5049]: I0127 17:19:18.807012 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 17:19:19 crc kubenswrapper[5049]: I0127 17:19:19.363216 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 17:19:19 crc kubenswrapper[5049]: I0127 17:19:19.436235 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d89c9402-b4c3-4180-8a61-9e63497ebb66","Type":"ContainerStarted","Data":"3795aa7b39ed977e0043cb4f820368620371658b97c32d3b6d2086d89acb44e8"} Jan 27 17:19:19 crc kubenswrapper[5049]: I0127 17:19:19.445449 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59384c20-c0a3-4524-9ddb-407b96e8f882","Type":"ContainerStarted","Data":"2cade812917319aaec34ab2b32477c1d71dca9c03ae47024b7ad8adb5f1b00d0"} Jan 27 17:19:19 crc kubenswrapper[5049]: I0127 17:19:19.445498 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59384c20-c0a3-4524-9ddb-407b96e8f882","Type":"ContainerStarted","Data":"b52d4e3eed508ac552122ca55a3dd4a2f8e5b9b1845e3a15a51086f1ba5ef723"} Jan 27 17:19:19 crc kubenswrapper[5049]: I0127 17:19:19.447537 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6c4b4464-1c98-412b-96cf-235908a4eaf6","Type":"ContainerStarted","Data":"7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba"} Jan 27 17:19:19 crc kubenswrapper[5049]: I0127 17:19:19.447582 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6c4b4464-1c98-412b-96cf-235908a4eaf6","Type":"ContainerStarted","Data":"804ee76150fc11e3d13be331c997a6674afa1c433ee76dcbd92da93c91445dfd"} Jan 27 17:19:19 crc kubenswrapper[5049]: I0127 17:19:19.447733 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 17:19:19 crc kubenswrapper[5049]: I0127 17:19:19.465371 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.465352373 podStartE2EDuration="2.465352373s" podCreationTimestamp="2026-01-27 17:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:19:19.459562997 +0000 UTC m=+1334.558536556" watchObservedRunningTime="2026-01-27 17:19:19.465352373 +0000 UTC m=+1334.564325922" Jan 27 17:19:19 crc kubenswrapper[5049]: I0127 17:19:19.669889 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a06b8e7e-7f19-47be-999f-dd2db1f6a2ce" path="/var/lib/kubelet/pods/a06b8e7e-7f19-47be-999f-dd2db1f6a2ce/volumes" Jan 27 17:19:20 crc kubenswrapper[5049]: I0127 17:19:20.458941 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59384c20-c0a3-4524-9ddb-407b96e8f882","Type":"ContainerStarted","Data":"d944265d72bf3afb4b6b73f0c7c83289738cd7f4ed8517272ac5b673ffa17c8f"} Jan 27 17:19:20 crc kubenswrapper[5049]: I0127 17:19:20.460815 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d89c9402-b4c3-4180-8a61-9e63497ebb66","Type":"ContainerStarted","Data":"43c820e42c8a751a91420a3b6d5e21201fc9f6a6613e57796cab346cad30e3d9"} Jan 27 17:19:20 crc kubenswrapper[5049]: I0127 17:19:20.478092 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-external-api-0" podStartSLOduration=3.478073883 podStartE2EDuration="3.478073883s" podCreationTimestamp="2026-01-27 17:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:19:20.476042505 +0000 UTC m=+1335.575016074" watchObservedRunningTime="2026-01-27 17:19:20.478073883 +0000 UTC m=+1335.577047432" Jan 27 17:19:21 crc kubenswrapper[5049]: I0127 17:19:21.472278 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d89c9402-b4c3-4180-8a61-9e63497ebb66","Type":"ContainerStarted","Data":"5054a75f289d476269fdd4cec1f526a79442c1e4b02d766eec63bd20c154c9f8"} Jan 27 17:19:21 crc kubenswrapper[5049]: I0127 17:19:21.495945 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.49592773 podStartE2EDuration="3.49592773s" podCreationTimestamp="2026-01-27 17:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:19:21.491061641 +0000 UTC m=+1336.590035200" watchObservedRunningTime="2026-01-27 17:19:21.49592773 +0000 UTC m=+1336.594901289" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.509168 5049 generic.go:334] "Generic (PLEG): container finished" podID="a1155387-125d-46be-a899-0ec8afc1411f" containerID="7ae3504deccf9251f861e1b04cad137e5cae30793e3aca67a011fc088b924d3c" exitCode=137 Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.509268 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1155387-125d-46be-a899-0ec8afc1411f","Type":"ContainerDied","Data":"7ae3504deccf9251f861e1b04cad137e5cae30793e3aca67a011fc088b924d3c"} Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.509798 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a1155387-125d-46be-a899-0ec8afc1411f","Type":"ContainerDied","Data":"ef9babfee2b25d52c361535f222e79111a3a7ae41c55aca45e00305d179eee18"} Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.509836 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef9babfee2b25d52c361535f222e79111a3a7ae41c55aca45e00305d179eee18" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.586998 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.752717 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-scripts\") pod \"a1155387-125d-46be-a899-0ec8afc1411f\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.752860 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1155387-125d-46be-a899-0ec8afc1411f-run-httpd\") pod \"a1155387-125d-46be-a899-0ec8afc1411f\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.752941 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkck4\" (UniqueName: \"kubernetes.io/projected/a1155387-125d-46be-a899-0ec8afc1411f-kube-api-access-zkck4\") pod \"a1155387-125d-46be-a899-0ec8afc1411f\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.752978 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-sg-core-conf-yaml\") pod \"a1155387-125d-46be-a899-0ec8afc1411f\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.753032 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-config-data\") pod \"a1155387-125d-46be-a899-0ec8afc1411f\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.753102 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-combined-ca-bundle\") pod \"a1155387-125d-46be-a899-0ec8afc1411f\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.753244 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1155387-125d-46be-a899-0ec8afc1411f-log-httpd\") pod \"a1155387-125d-46be-a899-0ec8afc1411f\" (UID: \"a1155387-125d-46be-a899-0ec8afc1411f\") " Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.753429 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1155387-125d-46be-a899-0ec8afc1411f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a1155387-125d-46be-a899-0ec8afc1411f" (UID: "a1155387-125d-46be-a899-0ec8afc1411f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.754137 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1155387-125d-46be-a899-0ec8afc1411f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a1155387-125d-46be-a899-0ec8afc1411f" (UID: "a1155387-125d-46be-a899-0ec8afc1411f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.754245 5049 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1155387-125d-46be-a899-0ec8afc1411f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.761339 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-scripts" (OuterVolumeSpecName: "scripts") pod "a1155387-125d-46be-a899-0ec8afc1411f" (UID: "a1155387-125d-46be-a899-0ec8afc1411f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.761424 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1155387-125d-46be-a899-0ec8afc1411f-kube-api-access-zkck4" (OuterVolumeSpecName: "kube-api-access-zkck4") pod "a1155387-125d-46be-a899-0ec8afc1411f" (UID: "a1155387-125d-46be-a899-0ec8afc1411f"). InnerVolumeSpecName "kube-api-access-zkck4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.785806 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a1155387-125d-46be-a899-0ec8afc1411f" (UID: "a1155387-125d-46be-a899-0ec8afc1411f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.855949 5049 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a1155387-125d-46be-a899-0ec8afc1411f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.856289 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.856299 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkck4\" (UniqueName: \"kubernetes.io/projected/a1155387-125d-46be-a899-0ec8afc1411f-kube-api-access-zkck4\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.856311 5049 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.872645 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1155387-125d-46be-a899-0ec8afc1411f" (UID: "a1155387-125d-46be-a899-0ec8afc1411f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.877271 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-config-data" (OuterVolumeSpecName: "config-data") pod "a1155387-125d-46be-a899-0ec8afc1411f" (UID: "a1155387-125d-46be-a899-0ec8afc1411f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.958325 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:23 crc kubenswrapper[5049]: I0127 17:19:23.958354 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1155387-125d-46be-a899-0ec8afc1411f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.523016 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.577760 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.592884 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.603273 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:19:24 crc kubenswrapper[5049]: E0127 17:19:24.603792 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="sg-core" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.603815 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="sg-core" Jan 27 17:19:24 crc kubenswrapper[5049]: E0127 17:19:24.603836 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="ceilometer-central-agent" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.603845 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="ceilometer-central-agent" Jan 27 17:19:24 crc kubenswrapper[5049]: E0127 17:19:24.603863 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="proxy-httpd" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.603871 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="proxy-httpd" Jan 27 17:19:24 crc kubenswrapper[5049]: E0127 17:19:24.603886 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="ceilometer-notification-agent" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.603894 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="ceilometer-notification-agent" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.604113 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="sg-core" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.604142 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="ceilometer-central-agent" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.604155 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="ceilometer-notification-agent" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.604172 5049 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="a1155387-125d-46be-a899-0ec8afc1411f" containerName="proxy-httpd" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.606116 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.609307 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.617466 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.627029 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.775470 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/106b6683-cac6-4291-b3fd-73259bc511c3-run-httpd\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.775659 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/106b6683-cac6-4291-b3fd-73259bc511c3-log-httpd\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.775788 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-scripts\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.775886 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ngrx\" (UniqueName: \"kubernetes.io/projected/106b6683-cac6-4291-b3fd-73259bc511c3-kube-api-access-2ngrx\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.776168 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.776281 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-config-data\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.776521 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.878853 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.878998 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/106b6683-cac6-4291-b3fd-73259bc511c3-run-httpd\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.879047 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/106b6683-cac6-4291-b3fd-73259bc511c3-log-httpd\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.879109 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-scripts\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.879175 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ngrx\" (UniqueName: \"kubernetes.io/projected/106b6683-cac6-4291-b3fd-73259bc511c3-kube-api-access-2ngrx\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.879251 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.879500 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-config-data\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.880647 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/106b6683-cac6-4291-b3fd-73259bc511c3-run-httpd\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.880902 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/106b6683-cac6-4291-b3fd-73259bc511c3-log-httpd\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.890944 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.892725 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.893303 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-scripts\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.894239 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-config-data\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.903069 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ngrx\" (UniqueName: \"kubernetes.io/projected/106b6683-cac6-4291-b3fd-73259bc511c3-kube-api-access-2ngrx\") pod \"ceilometer-0\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " pod="openstack/ceilometer-0" Jan 27 17:19:24 crc kubenswrapper[5049]: I0127 17:19:24.947361 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:19:25 crc kubenswrapper[5049]: I0127 17:19:25.426326 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:19:25 crc kubenswrapper[5049]: W0127 17:19:25.454627 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod106b6683_cac6_4291_b3fd_73259bc511c3.slice/crio-6fd5cf1fab1449b7439f87927af183378b75984a985cb5e06e6e7d8945f58a8e WatchSource:0}: Error finding container 6fd5cf1fab1449b7439f87927af183378b75984a985cb5e06e6e7d8945f58a8e: Status 404 returned error can't find the container with id 6fd5cf1fab1449b7439f87927af183378b75984a985cb5e06e6e7d8945f58a8e Jan 27 17:19:25 crc kubenswrapper[5049]: I0127 17:19:25.532998 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"106b6683-cac6-4291-b3fd-73259bc511c3","Type":"ContainerStarted","Data":"6fd5cf1fab1449b7439f87927af183378b75984a985cb5e06e6e7d8945f58a8e"} Jan 27 17:19:25 crc kubenswrapper[5049]: I0127 17:19:25.665125 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1155387-125d-46be-a899-0ec8afc1411f" path="/var/lib/kubelet/pods/a1155387-125d-46be-a899-0ec8afc1411f/volumes" Jan 27 17:19:27 crc kubenswrapper[5049]: I0127 17:19:27.555289 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"106b6683-cac6-4291-b3fd-73259bc511c3","Type":"ContainerStarted","Data":"0b2badcfbd548a1817f16454da7f854e9a38fd084020e64028b0e3e831339c72"} Jan 27 17:19:27 crc kubenswrapper[5049]: I0127 17:19:27.936352 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 17:19:27 crc kubenswrapper[5049]: I0127 17:19:27.937170 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 17:19:27 crc kubenswrapper[5049]: I0127 17:19:27.975692 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 17:19:27 crc kubenswrapper[5049]: I0127 17:19:27.993402 5049 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.063692 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.548000 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-4llcp"] Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.549888 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.552303 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.552704 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.559861 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4llcp"] Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.569355 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"106b6683-cac6-4291-b3fd-73259bc511c3","Type":"ContainerStarted","Data":"3dc228f00c4608ac53cf0a59ec988cc912c9f4d0c096eecf91ff7a4738304de8"} Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.569403 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"106b6683-cac6-4291-b3fd-73259bc511c3","Type":"ContainerStarted","Data":"ef20f3481756251cb513c5a013353e2bb19fbc6f3e627e561ed4e15231346830"} Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.569811 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.570206 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.650943 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfjwn\" (UniqueName: \"kubernetes.io/projected/d22525db-6f4e-458d-83c7-c27f295e8363-kube-api-access-bfjwn\") pod \"nova-cell0-cell-mapping-4llcp\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.651010 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4llcp\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.651479 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-config-data\") pod \"nova-cell0-cell-mapping-4llcp\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.652114 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-scripts\") pod \"nova-cell0-cell-mapping-4llcp\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.732031 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.733460 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.736632 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.744247 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.754168 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-scripts\") pod \"nova-cell0-cell-mapping-4llcp\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.754460 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfjwn\" (UniqueName: \"kubernetes.io/projected/d22525db-6f4e-458d-83c7-c27f295e8363-kube-api-access-bfjwn\") pod \"nova-cell0-cell-mapping-4llcp\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.754559 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4llcp\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.754726 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-config-data\") pod \"nova-cell0-cell-mapping-4llcp\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.759914 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-scripts\") pod \"nova-cell0-cell-mapping-4llcp\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.759931 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-config-data\") pod \"nova-cell0-cell-mapping-4llcp\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.760209 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4llcp\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:28 crc kubenswrapper[5049]: 
I0127 17:19:28.792145 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.793311 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.797655 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.809152 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.809428 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.809396 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfjwn\" (UniqueName: \"kubernetes.io/projected/d22525db-6f4e-458d-83c7-c27f295e8363-kube-api-access-bfjwn\") pod \"nova-cell0-cell-mapping-4llcp\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.834777 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.867409 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.869943 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dfdec56-8598-48c7-9266-f6b9733e0355-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfdec56-8598-48c7-9266-f6b9733e0355\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.869989 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " pod="openstack/nova-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.870020 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-config-data\") pod \"nova-api-0\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " pod="openstack/nova-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.870042 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p87cv\" (UniqueName: \"kubernetes.io/projected/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-kube-api-access-p87cv\") pod \"nova-api-0\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " pod="openstack/nova-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.870078 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljls7\" (UniqueName: \"kubernetes.io/projected/9dfdec56-8598-48c7-9266-f6b9733e0355-kube-api-access-ljls7\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfdec56-8598-48c7-9266-f6b9733e0355\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 
17:19:28.870122 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfdec56-8598-48c7-9266-f6b9733e0355-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfdec56-8598-48c7-9266-f6b9733e0355\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.870154 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-logs\") pod \"nova-api-0\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " pod="openstack/nova-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.877769 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.942837 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.973567 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.975017 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.979363 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfdec56-8598-48c7-9266-f6b9733e0355-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfdec56-8598-48c7-9266-f6b9733e0355\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.979414 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-logs\") pod \"nova-api-0\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " pod="openstack/nova-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.979525 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dfdec56-8598-48c7-9266-f6b9733e0355-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfdec56-8598-48c7-9266-f6b9733e0355\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.979564 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " pod="openstack/nova-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.979588 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-config-data\") pod \"nova-api-0\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " pod="openstack/nova-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.979602 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p87cv\" (UniqueName: \"kubernetes.io/projected/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-kube-api-access-p87cv\") pod \"nova-api-0\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " pod="openstack/nova-api-0" 
Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.979622 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljls7\" (UniqueName: \"kubernetes.io/projected/9dfdec56-8598-48c7-9266-f6b9733e0355-kube-api-access-ljls7\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfdec56-8598-48c7-9266-f6b9733e0355\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.987776 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.987978 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.988519 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-logs\") pod \"nova-api-0\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " pod="openstack/nova-api-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.998662 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dfdec56-8598-48c7-9266-f6b9733e0355-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfdec56-8598-48c7-9266-f6b9733e0355\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:19:28 crc kubenswrapper[5049]: I0127 17:19:28.998908 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.000737 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.012888 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.023110 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljls7\" (UniqueName: \"kubernetes.io/projected/9dfdec56-8598-48c7-9266-f6b9733e0355-kube-api-access-ljls7\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfdec56-8598-48c7-9266-f6b9733e0355\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.026749 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p87cv\" (UniqueName: \"kubernetes.io/projected/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-kube-api-access-p87cv\") pod \"nova-api-0\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " pod="openstack/nova-api-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.029392 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " pod="openstack/nova-api-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.030246 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfdec56-8598-48c7-9266-f6b9733e0355-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9dfdec56-8598-48c7-9266-f6b9733e0355\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.030391 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-config-data\") pod \"nova-api-0\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " pod="openstack/nova-api-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.050174 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.074299 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.080757 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48k47\" (UniqueName: \"kubernetes.io/projected/a9cc40c3-d1e6-4a12-912f-be0f724ca019-kube-api-access-48k47\") pod \"nova-metadata-0\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " pod="openstack/nova-metadata-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.080829 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9cc40c3-d1e6-4a12-912f-be0f724ca019-logs\") pod \"nova-metadata-0\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " pod="openstack/nova-metadata-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.080851 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-config-data\") pod \"nova-scheduler-0\" (UID: \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.080903 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.080933 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9cc40c3-d1e6-4a12-912f-be0f724ca019-config-data\") pod \"nova-metadata-0\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " pod="openstack/nova-metadata-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.080956 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mvt4\" (UniqueName: \"kubernetes.io/projected/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-kube-api-access-7mvt4\") pod \"nova-scheduler-0\" (UID: \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.080977 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9cc40c3-d1e6-4a12-912f-be0f724ca019-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " pod="openstack/nova-metadata-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.114715 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6bs8c"] Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.116199 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.123590 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6bs8c"] Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.182474 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q7bj\" (UniqueName: \"kubernetes.io/projected/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-kube-api-access-9q7bj\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.182775 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.182814 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9cc40c3-d1e6-4a12-912f-be0f724ca019-config-data\") pod \"nova-metadata-0\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " pod="openstack/nova-metadata-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.182832 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.182853 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mvt4\" (UniqueName: \"kubernetes.io/projected/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-kube-api-access-7mvt4\") pod \"nova-scheduler-0\" (UID: \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.182878 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9cc40c3-d1e6-4a12-912f-be0f724ca019-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " pod="openstack/nova-metadata-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.182925 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.182952 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.182982 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48k47\" (UniqueName: 
\"kubernetes.io/projected/a9cc40c3-d1e6-4a12-912f-be0f724ca019-kube-api-access-48k47\") pod \"nova-metadata-0\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " pod="openstack/nova-metadata-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.182999 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-dns-svc\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.183030 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9cc40c3-d1e6-4a12-912f-be0f724ca019-logs\") pod \"nova-metadata-0\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " pod="openstack/nova-metadata-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.183049 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-config-data\") pod \"nova-scheduler-0\" (UID: \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.183065 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-config\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.187415 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9cc40c3-d1e6-4a12-912f-be0f724ca019-logs\") pod \"nova-metadata-0\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " pod="openstack/nova-metadata-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.190825 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9cc40c3-d1e6-4a12-912f-be0f724ca019-config-data\") pod \"nova-metadata-0\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " pod="openstack/nova-metadata-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.191768 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9cc40c3-d1e6-4a12-912f-be0f724ca019-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " pod="openstack/nova-metadata-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.193005 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.198119 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-config-data\") pod \"nova-scheduler-0\" (UID: \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.216488 5049 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-7mvt4\" (UniqueName: \"kubernetes.io/projected/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-kube-api-access-7mvt4\") pod \"nova-scheduler-0\" (UID: \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.220225 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48k47\" (UniqueName: \"kubernetes.io/projected/a9cc40c3-d1e6-4a12-912f-be0f724ca019-kube-api-access-48k47\") pod \"nova-metadata-0\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " pod="openstack/nova-metadata-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.278257 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.287478 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.287534 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-dns-svc\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.287572 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-config\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.287608 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q7bj\" (UniqueName: \"kubernetes.io/projected/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-kube-api-access-9q7bj\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.287651 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.287845 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.289011 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 
17:19:29.289703 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-config\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.289872 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.289971 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.290394 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-dns-svc\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.313396 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q7bj\" (UniqueName: \"kubernetes.io/projected/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-kube-api-access-9q7bj\") pod \"dnsmasq-dns-757b4f8459-6bs8c\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.313830 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.344433 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.442153 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.486024 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.569783 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4llcp"] Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.582235 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486","Type":"ContainerStarted","Data":"96bde3089afdf6f78db7413a30d5baca2af24db54e37202934d026e1752f7297"} Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.582275 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.583122 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.858488 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d89x5"] Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.859994 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.863598 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.863878 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.921788 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d89x5"] Jan 27 17:19:29 crc kubenswrapper[5049]: I0127 17:19:29.955312 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 17:19:29 crc kubenswrapper[5049]: W0127 17:19:29.963074 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dfdec56_8598_48c7_9266_f6b9733e0355.slice/crio-2fe307fff764794627a24036097cbc1648c4205d84c96a593dab6d35b55fdcc6 WatchSource:0}: Error finding container 2fe307fff764794627a24036097cbc1648c4205d84c96a593dab6d35b55fdcc6: Status 404 returned error can't find the container with id 2fe307fff764794627a24036097cbc1648c4205d84c96a593dab6d35b55fdcc6 Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.020428 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-config-data\") pod \"nova-cell1-conductor-db-sync-d89x5\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.020479 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65p8p\" (UniqueName: \"kubernetes.io/projected/4e27cd2f-9407-4444-9914-9892b1e41d13-kube-api-access-65p8p\") pod \"nova-cell1-conductor-db-sync-d89x5\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.020527 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-scripts\") pod \"nova-cell1-conductor-db-sync-d89x5\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.020609 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-d89x5\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.100215 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.122858 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-config-data\") pod \"nova-cell1-conductor-db-sync-d89x5\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.122923 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65p8p\" (UniqueName: \"kubernetes.io/projected/4e27cd2f-9407-4444-9914-9892b1e41d13-kube-api-access-65p8p\") pod \"nova-cell1-conductor-db-sync-d89x5\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.122986 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-scripts\") pod \"nova-cell1-conductor-db-sync-d89x5\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.123101 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-d89x5\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.129890 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-scripts\") pod \"nova-cell1-conductor-db-sync-d89x5\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.132082 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-config-data\") pod \"nova-cell1-conductor-db-sync-d89x5\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.133414 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-d89x5\" (UID: 
\"4e27cd2f-9407-4444-9914-9892b1e41d13\") " pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.141008 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65p8p\" (UniqueName: \"kubernetes.io/projected/4e27cd2f-9407-4444-9914-9892b1e41d13-kube-api-access-65p8p\") pod \"nova-cell1-conductor-db-sync-d89x5\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.209382 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6bs8c"] Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.209738 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.219305 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.615903 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"106b6683-cac6-4291-b3fd-73259bc511c3","Type":"ContainerStarted","Data":"59d3238cc70989f942daff70c775ab13b0bdcd4e891173403c92b0b7ded74ba0"} Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.618117 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.621494 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a","Type":"ContainerStarted","Data":"46a82ccf749386b64944e5ffa3ee5dd06c4536d0378f16645790ddc28f289f6b"} Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.624478 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" event={"ID":"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5","Type":"ContainerStarted","Data":"9c8440db0ef78ecae49ee6c85a51604fad4316cd9d40ddbcf8eb90f6935bb776"} Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.624507 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" event={"ID":"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5","Type":"ContainerStarted","Data":"29c8f428c2d7e58e7a24d8a188daa6a28a54d6e53fa13e04afd8878c51e5eaa1"} Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.667244 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4llcp" event={"ID":"d22525db-6f4e-458d-83c7-c27f295e8363","Type":"ContainerStarted","Data":"87e630809570d66b82043e72d7b0c73f26f019253b501e0c1c593168656b633d"} Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.667289 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4llcp" event={"ID":"d22525db-6f4e-458d-83c7-c27f295e8363","Type":"ContainerStarted","Data":"b2048a223142b4b8a58cafa3c5a7b27be1f9e40f978487a9d29a85208b29b293"} Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.667487 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.007021251 podStartE2EDuration="6.667464911s" podCreationTimestamp="2026-01-27 17:19:24 +0000 UTC" firstStartedPulling="2026-01-27 17:19:25.458347561 +0000 UTC m=+1340.557321110" lastFinishedPulling="2026-01-27 17:19:30.118791221 +0000 UTC m=+1345.217764770" observedRunningTime="2026-01-27 17:19:30.638713029 +0000 UTC 
m=+1345.737686588" watchObservedRunningTime="2026-01-27 17:19:30.667464911 +0000 UTC m=+1345.766438460" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.681652 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9cc40c3-d1e6-4a12-912f-be0f724ca019","Type":"ContainerStarted","Data":"1388776fe1f03db1ecd52b8baed32da23ddfe3e9fd169e5d70fcd78d79ea482b"} Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.693794 5049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.693829 5049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.697739 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9dfdec56-8598-48c7-9266-f6b9733e0355","Type":"ContainerStarted","Data":"2fe307fff764794627a24036097cbc1648c4205d84c96a593dab6d35b55fdcc6"} Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.722294 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4llcp" podStartSLOduration=2.722278149 podStartE2EDuration="2.722278149s" podCreationTimestamp="2026-01-27 17:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:19:30.703494632 +0000 UTC m=+1345.802468171" watchObservedRunningTime="2026-01-27 17:19:30.722278149 +0000 UTC m=+1345.821251698" Jan 27 17:19:30 crc kubenswrapper[5049]: I0127 17:19:30.764569 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d89x5"] Jan 27 17:19:31 crc kubenswrapper[5049]: I0127 17:19:31.727636 5049 generic.go:334] "Generic (PLEG): container finished" podID="44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" containerID="9c8440db0ef78ecae49ee6c85a51604fad4316cd9d40ddbcf8eb90f6935bb776" exitCode=0 Jan 27 17:19:31 crc kubenswrapper[5049]: I0127 17:19:31.732133 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" event={"ID":"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5","Type":"ContainerDied","Data":"9c8440db0ef78ecae49ee6c85a51604fad4316cd9d40ddbcf8eb90f6935bb776"} Jan 27 17:19:31 crc kubenswrapper[5049]: I0127 17:19:31.732161 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" event={"ID":"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5","Type":"ContainerStarted","Data":"00afd736563f03f919eb6655adab62c54b2c697ac19fbd08baf82a3d455db34a"} Jan 27 17:19:31 crc kubenswrapper[5049]: I0127 17:19:31.733188 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:31 crc kubenswrapper[5049]: I0127 17:19:31.746515 5049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 17:19:31 crc kubenswrapper[5049]: I0127 17:19:31.746546 5049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 17:19:31 crc kubenswrapper[5049]: I0127 17:19:31.747860 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d89x5" event={"ID":"4e27cd2f-9407-4444-9914-9892b1e41d13","Type":"ContainerStarted","Data":"563de14929d078dd19bf2cb77b128291a6d18e7e9090227946c7b7340017db70"} Jan 27 17:19:31 crc kubenswrapper[5049]: I0127 17:19:31.747887 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-d89x5" event={"ID":"4e27cd2f-9407-4444-9914-9892b1e41d13","Type":"ContainerStarted","Data":"e258f2a80c9c7cbc4063e55334355d6b1ec2d82dfd23bf3d2aa4efcec911f3d6"} Jan 27 17:19:31 crc kubenswrapper[5049]: I0127 17:19:31.764897 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" podStartSLOduration=2.764880324 podStartE2EDuration="2.764880324s" podCreationTimestamp="2026-01-27 17:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:19:31.752708546 +0000 UTC m=+1346.851682095" watchObservedRunningTime="2026-01-27 17:19:31.764880324 +0000 UTC m=+1346.863853873" Jan 27 17:19:31 crc kubenswrapper[5049]: I0127 17:19:31.787040 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-d89x5" podStartSLOduration=2.787018387 podStartE2EDuration="2.787018387s" podCreationTimestamp="2026-01-27 17:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:19:31.782152548 +0000 UTC m=+1346.881126087" watchObservedRunningTime="2026-01-27 17:19:31.787018387 +0000 UTC m=+1346.885991936" Jan 27 17:19:32 crc kubenswrapper[5049]: I0127 17:19:32.084898 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 17:19:32 crc kubenswrapper[5049]: I0127 17:19:32.085034 5049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 17:19:32 crc kubenswrapper[5049]: I0127 17:19:32.091165 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 17:19:32 crc kubenswrapper[5049]: I0127 17:19:32.363519 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 17:19:32 crc kubenswrapper[5049]: I0127 17:19:32.520905 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:32 crc kubenswrapper[5049]: I0127 17:19:32.556136 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 17:19:32 crc kubenswrapper[5049]: I0127 17:19:32.771405 5049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 17:19:33 crc kubenswrapper[5049]: I0127 17:19:33.299715 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 17:19:35 crc kubenswrapper[5049]: I0127 17:19:35.806527 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a","Type":"ContainerStarted","Data":"64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94"} Jan 27 17:19:35 crc kubenswrapper[5049]: I0127 17:19:35.811987 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486","Type":"ContainerStarted","Data":"f034a42c80a71ded5c45cface06423a50b753f857ad67603d6e62bdf0890ae9c"} Jan 27 17:19:35 crc kubenswrapper[5049]: I0127 17:19:35.814942 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"a9cc40c3-d1e6-4a12-912f-be0f724ca019","Type":"ContainerStarted","Data":"61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac"} Jan 27 17:19:35 crc kubenswrapper[5049]: I0127 17:19:35.817305 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9dfdec56-8598-48c7-9266-f6b9733e0355","Type":"ContainerStarted","Data":"af7fb8d9d1b1260eba0215eeb0de476e16200572a2e090311c33a055db76705b"} Jan 27 17:19:35 crc kubenswrapper[5049]: I0127 17:19:35.817436 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="9dfdec56-8598-48c7-9266-f6b9733e0355" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://af7fb8d9d1b1260eba0215eeb0de476e16200572a2e090311c33a055db76705b" gracePeriod=30 Jan 27 17:19:35 crc kubenswrapper[5049]: I0127 17:19:35.830516 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.798611871 podStartE2EDuration="7.830500855s" podCreationTimestamp="2026-01-27 17:19:28 +0000 UTC" firstStartedPulling="2026-01-27 17:19:30.232916265 +0000 UTC m=+1345.331889824" lastFinishedPulling="2026-01-27 17:19:35.264805268 +0000 UTC m=+1350.363778808" observedRunningTime="2026-01-27 17:19:35.825249695 +0000 UTC m=+1350.924223244" watchObservedRunningTime="2026-01-27 17:19:35.830500855 +0000 UTC m=+1350.929474404" Jan 27 17:19:35 crc kubenswrapper[5049]: I0127 17:19:35.848087 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.541778407 podStartE2EDuration="7.848047857s" podCreationTimestamp="2026-01-27 17:19:28 +0000 UTC" firstStartedPulling="2026-01-27 17:19:29.964556671 +0000 UTC m=+1345.063530220" lastFinishedPulling="2026-01-27 17:19:35.270826121 +0000 UTC m=+1350.369799670" observedRunningTime="2026-01-27 17:19:35.84253973 +0000 UTC m=+1350.941513279" watchObservedRunningTime="2026-01-27 17:19:35.848047857 +0000 UTC m=+1350.947021406" Jan 27 17:19:36 crc kubenswrapper[5049]: I0127 17:19:36.863053 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9cc40c3-d1e6-4a12-912f-be0f724ca019","Type":"ContainerStarted","Data":"52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2"} Jan 27 17:19:36 crc kubenswrapper[5049]: I0127 17:19:36.865442 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a9cc40c3-d1e6-4a12-912f-be0f724ca019" containerName="nova-metadata-log" containerID="cri-o://61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac" gracePeriod=30 Jan 27 17:19:36 crc kubenswrapper[5049]: I0127 17:19:36.865484 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a9cc40c3-d1e6-4a12-912f-be0f724ca019" containerName="nova-metadata-metadata" containerID="cri-o://52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2" gracePeriod=30 Jan 27 17:19:36 crc kubenswrapper[5049]: I0127 17:19:36.868192 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486","Type":"ContainerStarted","Data":"642287ee3807371c7c94cce320d50ccac072020f5af56b7ffc1275383d8d87bd"} Jan 27 17:19:36 crc kubenswrapper[5049]: I0127 17:19:36.891517 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" 
podStartSLOduration=3.753027664 podStartE2EDuration="8.891495626s" podCreationTimestamp="2026-01-27 17:19:28 +0000 UTC" firstStartedPulling="2026-01-27 17:19:30.123234588 +0000 UTC m=+1345.222208137" lastFinishedPulling="2026-01-27 17:19:35.26170255 +0000 UTC m=+1350.360676099" observedRunningTime="2026-01-27 17:19:36.887137882 +0000 UTC m=+1351.986111451" watchObservedRunningTime="2026-01-27 17:19:36.891495626 +0000 UTC m=+1351.990469195" Jan 27 17:19:36 crc kubenswrapper[5049]: I0127 17:19:36.912148 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.187502563 podStartE2EDuration="8.912126816s" podCreationTimestamp="2026-01-27 17:19:28 +0000 UTC" firstStartedPulling="2026-01-27 17:19:29.541789842 +0000 UTC m=+1344.640763391" lastFinishedPulling="2026-01-27 17:19:35.266414075 +0000 UTC m=+1350.365387644" observedRunningTime="2026-01-27 17:19:36.908930905 +0000 UTC m=+1352.007904464" watchObservedRunningTime="2026-01-27 17:19:36.912126816 +0000 UTC m=+1352.011100385" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.492051 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.582855 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48k47\" (UniqueName: \"kubernetes.io/projected/a9cc40c3-d1e6-4a12-912f-be0f724ca019-kube-api-access-48k47\") pod \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.583012 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9cc40c3-d1e6-4a12-912f-be0f724ca019-logs\") pod \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.583040 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9cc40c3-d1e6-4a12-912f-be0f724ca019-config-data\") pod \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.583093 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9cc40c3-d1e6-4a12-912f-be0f724ca019-combined-ca-bundle\") pod \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\" (UID: \"a9cc40c3-d1e6-4a12-912f-be0f724ca019\") " Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.584141 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9cc40c3-d1e6-4a12-912f-be0f724ca019-logs" (OuterVolumeSpecName: "logs") pod "a9cc40c3-d1e6-4a12-912f-be0f724ca019" (UID: "a9cc40c3-d1e6-4a12-912f-be0f724ca019"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.589072 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9cc40c3-d1e6-4a12-912f-be0f724ca019-kube-api-access-48k47" (OuterVolumeSpecName: "kube-api-access-48k47") pod "a9cc40c3-d1e6-4a12-912f-be0f724ca019" (UID: "a9cc40c3-d1e6-4a12-912f-be0f724ca019"). InnerVolumeSpecName "kube-api-access-48k47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.616262 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9cc40c3-d1e6-4a12-912f-be0f724ca019-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9cc40c3-d1e6-4a12-912f-be0f724ca019" (UID: "a9cc40c3-d1e6-4a12-912f-be0f724ca019"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.624798 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9cc40c3-d1e6-4a12-912f-be0f724ca019-config-data" (OuterVolumeSpecName: "config-data") pod "a9cc40c3-d1e6-4a12-912f-be0f724ca019" (UID: "a9cc40c3-d1e6-4a12-912f-be0f724ca019"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.685817 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48k47\" (UniqueName: \"kubernetes.io/projected/a9cc40c3-d1e6-4a12-912f-be0f724ca019-kube-api-access-48k47\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.685858 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9cc40c3-d1e6-4a12-912f-be0f724ca019-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.685871 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9cc40c3-d1e6-4a12-912f-be0f724ca019-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.685903 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9cc40c3-d1e6-4a12-912f-be0f724ca019-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.878601 5049 generic.go:334] "Generic (PLEG): container finished" podID="a9cc40c3-d1e6-4a12-912f-be0f724ca019" containerID="52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2" exitCode=0 Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.878648 5049 generic.go:334] "Generic (PLEG): container finished" podID="a9cc40c3-d1e6-4a12-912f-be0f724ca019" containerID="61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac" exitCode=143 Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.878666 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.878738 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9cc40c3-d1e6-4a12-912f-be0f724ca019","Type":"ContainerDied","Data":"52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2"} Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.878791 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9cc40c3-d1e6-4a12-912f-be0f724ca019","Type":"ContainerDied","Data":"61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac"} Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.878819 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9cc40c3-d1e6-4a12-912f-be0f724ca019","Type":"ContainerDied","Data":"1388776fe1f03db1ecd52b8baed32da23ddfe3e9fd169e5d70fcd78d79ea482b"} Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.878855 5049 scope.go:117] "RemoveContainer" containerID="52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.902771 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.912557 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.919557 5049 scope.go:117] "RemoveContainer" containerID="61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.942632 5049 scope.go:117] "RemoveContainer" containerID="52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2" Jan 27 17:19:37 crc kubenswrapper[5049]: E0127 17:19:37.944372 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2\": container with ID starting with 52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2 not found: ID does not exist" containerID="52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.944437 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2"} err="failed to get container status \"52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2\": rpc error: code = NotFound desc = could not find container \"52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2\": container with ID starting with 52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2 not found: ID does not exist" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.944485 5049 scope.go:117] "RemoveContainer" containerID="61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac" Jan 27 17:19:37 crc kubenswrapper[5049]: E0127 17:19:37.944957 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac\": container with ID starting with 61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac not found: ID does not exist" containerID="61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 
17:19:37.944989 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac"} err="failed to get container status \"61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac\": rpc error: code = NotFound desc = could not find container \"61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac\": container with ID starting with 61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac not found: ID does not exist" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.945011 5049 scope.go:117] "RemoveContainer" containerID="52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.945548 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2"} err="failed to get container status \"52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2\": rpc error: code = NotFound desc = could not find container \"52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2\": container with ID starting with 52533472728ccb45f633281f22e2d44780e23c40864e97788f70d2f5dd7c1bf2 not found: ID does not exist" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.945592 5049 scope.go:117] "RemoveContainer" containerID="61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.946024 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac"} err="failed to get container status \"61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac\": rpc error: code = NotFound desc = could not find container \"61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac\": container with ID starting with 61fa201f79f5793634533d82e63363c24e1d4aef1717d7c78ae81c1f984c0dac not found: ID does not exist" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.955913 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:37 crc kubenswrapper[5049]: E0127 17:19:37.956739 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9cc40c3-d1e6-4a12-912f-be0f724ca019" containerName="nova-metadata-metadata" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.956772 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9cc40c3-d1e6-4a12-912f-be0f724ca019" containerName="nova-metadata-metadata" Jan 27 17:19:37 crc kubenswrapper[5049]: E0127 17:19:37.956810 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9cc40c3-d1e6-4a12-912f-be0f724ca019" containerName="nova-metadata-log" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.956823 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9cc40c3-d1e6-4a12-912f-be0f724ca019" containerName="nova-metadata-log" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.957188 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9cc40c3-d1e6-4a12-912f-be0f724ca019" containerName="nova-metadata-metadata" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.957238 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9cc40c3-d1e6-4a12-912f-be0f724ca019" containerName="nova-metadata-log" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.959199 5049 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.963094 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.963398 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 17:19:37 crc kubenswrapper[5049]: I0127 17:19:37.980251 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.094790 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.095100 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf7p7\" (UniqueName: \"kubernetes.io/projected/2a8df43a-9939-47e0-8a11-b4feacd8ec95-kube-api-access-jf7p7\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.095167 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8df43a-9939-47e0-8a11-b4feacd8ec95-logs\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.095185 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.095279 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-config-data\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.197354 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.197401 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf7p7\" (UniqueName: \"kubernetes.io/projected/2a8df43a-9939-47e0-8a11-b4feacd8ec95-kube-api-access-jf7p7\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.197454 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8df43a-9939-47e0-8a11-b4feacd8ec95-logs\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " 
pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.197477 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.197508 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-config-data\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.198205 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8df43a-9939-47e0-8a11-b4feacd8ec95-logs\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.201766 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.201818 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.211801 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-config-data\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.218583 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf7p7\" (UniqueName: \"kubernetes.io/projected/2a8df43a-9939-47e0-8a11-b4feacd8ec95-kube-api-access-jf7p7\") pod \"nova-metadata-0\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.280601 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.775767 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:38 crc kubenswrapper[5049]: W0127 17:19:38.777647 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a8df43a_9939_47e0_8a11_b4feacd8ec95.slice/crio-96312df78639d7b2572c4881a58dbfd65bd300c93cde0b28846f32058053871f WatchSource:0}: Error finding container 96312df78639d7b2572c4881a58dbfd65bd300c93cde0b28846f32058053871f: Status 404 returned error can't find the container with id 96312df78639d7b2572c4881a58dbfd65bd300c93cde0b28846f32058053871f Jan 27 17:19:38 crc kubenswrapper[5049]: I0127 17:19:38.888727 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a8df43a-9939-47e0-8a11-b4feacd8ec95","Type":"ContainerStarted","Data":"96312df78639d7b2572c4881a58dbfd65bd300c93cde0b28846f32058053871f"} Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.051271 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.051313 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.279476 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.313996 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.314086 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.342103 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.444725 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.544026 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-hjxkn"] Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.544467 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" podUID="c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" containerName="dnsmasq-dns" containerID="cri-o://1bfb28b1f8e7b3385be794ecd9908288adbec16d1d339ee1e288f822b69582a1" gracePeriod=10 Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.667497 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9cc40c3-d1e6-4a12-912f-be0f724ca019" path="/var/lib/kubelet/pods/a9cc40c3-d1e6-4a12-912f-be0f724ca019/volumes" Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.900789 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a8df43a-9939-47e0-8a11-b4feacd8ec95","Type":"ContainerStarted","Data":"f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b"} Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.901131 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"2a8df43a-9939-47e0-8a11-b4feacd8ec95","Type":"ContainerStarted","Data":"4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe"} Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.911726 5049 generic.go:334] "Generic (PLEG): container finished" podID="c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" containerID="1bfb28b1f8e7b3385be794ecd9908288adbec16d1d339ee1e288f822b69582a1" exitCode=0 Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.911787 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" event={"ID":"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325","Type":"ContainerDied","Data":"1bfb28b1f8e7b3385be794ecd9908288adbec16d1d339ee1e288f822b69582a1"} Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.927994 5049 generic.go:334] "Generic (PLEG): container finished" podID="d22525db-6f4e-458d-83c7-c27f295e8363" containerID="87e630809570d66b82043e72d7b0c73f26f019253b501e0c1c593168656b633d" exitCode=0 Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.929117 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4llcp" event={"ID":"d22525db-6f4e-458d-83c7-c27f295e8363","Type":"ContainerDied","Data":"87e630809570d66b82043e72d7b0c73f26f019253b501e0c1c593168656b633d"} Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.929382 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9293700879999998 podStartE2EDuration="2.929370088s" podCreationTimestamp="2026-01-27 17:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:19:39.926408273 +0000 UTC m=+1355.025381822" watchObservedRunningTime="2026-01-27 17:19:39.929370088 +0000 UTC m=+1355.028343637" Jan 27 17:19:39 crc kubenswrapper[5049]: I0127 17:19:39.971958 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.143921 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.144015 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.183:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.164826 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.261332 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-config\") pod \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.261491 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-ovsdbserver-nb\") pod \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.261568 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-ovsdbserver-sb\") pod \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.261620 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-dns-svc\") pod \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.261649 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg2bj\" (UniqueName: \"kubernetes.io/projected/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-kube-api-access-hg2bj\") pod \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.261744 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-dns-swift-storage-0\") pod \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\" (UID: \"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325\") " Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.284411 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-kube-api-access-hg2bj" (OuterVolumeSpecName: "kube-api-access-hg2bj") pod "c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" (UID: "c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325"). InnerVolumeSpecName "kube-api-access-hg2bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.325899 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" (UID: "c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.327378 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" (UID: "c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.333397 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" (UID: "c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.334796 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-config" (OuterVolumeSpecName: "config") pod "c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" (UID: "c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.359054 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" (UID: "c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.363837 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.363858 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.363867 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.363879 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.363888 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg2bj\" (UniqueName: \"kubernetes.io/projected/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-kube-api-access-hg2bj\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.363896 5049 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.946508 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" event={"ID":"c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325","Type":"ContainerDied","Data":"4fd885b5b468e56cad03667ddc6a3b7c5a42c486208349e9ac5f67a7aa80113e"} Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.946624 5049 scope.go:117] "RemoveContainer" containerID="1bfb28b1f8e7b3385be794ecd9908288adbec16d1d339ee1e288f822b69582a1" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.946521 5049 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-hjxkn" Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.949391 5049 generic.go:334] "Generic (PLEG): container finished" podID="4e27cd2f-9407-4444-9914-9892b1e41d13" containerID="563de14929d078dd19bf2cb77b128291a6d18e7e9090227946c7b7340017db70" exitCode=0 Jan 27 17:19:40 crc kubenswrapper[5049]: I0127 17:19:40.949548 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d89x5" event={"ID":"4e27cd2f-9407-4444-9914-9892b1e41d13","Type":"ContainerDied","Data":"563de14929d078dd19bf2cb77b128291a6d18e7e9090227946c7b7340017db70"} Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.014915 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-hjxkn"] Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.018409 5049 scope.go:117] "RemoveContainer" containerID="db361407212f361d4bd640c5d3094d114a1f36086e617ff02576af1e7a67a5a9" Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.023502 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-hjxkn"] Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.375233 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.484984 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-combined-ca-bundle\") pod \"d22525db-6f4e-458d-83c7-c27f295e8363\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.485100 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-config-data\") pod \"d22525db-6f4e-458d-83c7-c27f295e8363\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.485286 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-scripts\") pod \"d22525db-6f4e-458d-83c7-c27f295e8363\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.485337 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfjwn\" (UniqueName: \"kubernetes.io/projected/d22525db-6f4e-458d-83c7-c27f295e8363-kube-api-access-bfjwn\") pod \"d22525db-6f4e-458d-83c7-c27f295e8363\" (UID: \"d22525db-6f4e-458d-83c7-c27f295e8363\") " Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.492059 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-scripts" (OuterVolumeSpecName: "scripts") pod "d22525db-6f4e-458d-83c7-c27f295e8363" (UID: "d22525db-6f4e-458d-83c7-c27f295e8363"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.492687 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d22525db-6f4e-458d-83c7-c27f295e8363-kube-api-access-bfjwn" (OuterVolumeSpecName: "kube-api-access-bfjwn") pod "d22525db-6f4e-458d-83c7-c27f295e8363" (UID: "d22525db-6f4e-458d-83c7-c27f295e8363"). InnerVolumeSpecName "kube-api-access-bfjwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.520270 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-config-data" (OuterVolumeSpecName: "config-data") pod "d22525db-6f4e-458d-83c7-c27f295e8363" (UID: "d22525db-6f4e-458d-83c7-c27f295e8363"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.527854 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d22525db-6f4e-458d-83c7-c27f295e8363" (UID: "d22525db-6f4e-458d-83c7-c27f295e8363"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.586937 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.586974 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfjwn\" (UniqueName: \"kubernetes.io/projected/d22525db-6f4e-458d-83c7-c27f295e8363-kube-api-access-bfjwn\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.586986 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.586995 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d22525db-6f4e-458d-83c7-c27f295e8363-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.657255 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" path="/var/lib/kubelet/pods/c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325/volumes" Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.961612 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4llcp" event={"ID":"d22525db-6f4e-458d-83c7-c27f295e8363","Type":"ContainerDied","Data":"b2048a223142b4b8a58cafa3c5a7b27be1f9e40f978487a9d29a85208b29b293"} Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.962045 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2048a223142b4b8a58cafa3c5a7b27be1f9e40f978487a9d29a85208b29b293" Jan 27 17:19:41 crc kubenswrapper[5049]: I0127 17:19:41.962116 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4llcp" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.202638 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.250632 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.251461 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" containerName="nova-api-log" containerID="cri-o://f034a42c80a71ded5c45cface06423a50b753f857ad67603d6e62bdf0890ae9c" gracePeriod=30 Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.251574 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" containerName="nova-api-api" containerID="cri-o://642287ee3807371c7c94cce320d50ccac072020f5af56b7ffc1275383d8d87bd" gracePeriod=30 Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.298489 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.298724 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a8df43a-9939-47e0-8a11-b4feacd8ec95" containerName="nova-metadata-log" containerID="cri-o://4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe" gracePeriod=30 Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.298851 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a8df43a-9939-47e0-8a11-b4feacd8ec95" containerName="nova-metadata-metadata" containerID="cri-o://f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b" gracePeriod=30 Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.594702 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.715143 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65p8p\" (UniqueName: \"kubernetes.io/projected/4e27cd2f-9407-4444-9914-9892b1e41d13-kube-api-access-65p8p\") pod \"4e27cd2f-9407-4444-9914-9892b1e41d13\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.715242 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-scripts\") pod \"4e27cd2f-9407-4444-9914-9892b1e41d13\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.715350 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-config-data\") pod \"4e27cd2f-9407-4444-9914-9892b1e41d13\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.715393 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-combined-ca-bundle\") pod \"4e27cd2f-9407-4444-9914-9892b1e41d13\" (UID: \"4e27cd2f-9407-4444-9914-9892b1e41d13\") " Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.724939 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e27cd2f-9407-4444-9914-9892b1e41d13-kube-api-access-65p8p" (OuterVolumeSpecName: "kube-api-access-65p8p") pod "4e27cd2f-9407-4444-9914-9892b1e41d13" (UID: "4e27cd2f-9407-4444-9914-9892b1e41d13"). InnerVolumeSpecName "kube-api-access-65p8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.730243 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-scripts" (OuterVolumeSpecName: "scripts") pod "4e27cd2f-9407-4444-9914-9892b1e41d13" (UID: "4e27cd2f-9407-4444-9914-9892b1e41d13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.766863 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e27cd2f-9407-4444-9914-9892b1e41d13" (UID: "4e27cd2f-9407-4444-9914-9892b1e41d13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.768629 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-config-data" (OuterVolumeSpecName: "config-data") pod "4e27cd2f-9407-4444-9914-9892b1e41d13" (UID: "4e27cd2f-9407-4444-9914-9892b1e41d13"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.817442 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.817476 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.817490 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65p8p\" (UniqueName: \"kubernetes.io/projected/4e27cd2f-9407-4444-9914-9892b1e41d13-kube-api-access-65p8p\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.817500 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e27cd2f-9407-4444-9914-9892b1e41d13-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.866593 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.918950 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-combined-ca-bundle\") pod \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.919066 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-config-data\") pod \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.919102 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-nova-metadata-tls-certs\") pod \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.919137 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf7p7\" (UniqueName: \"kubernetes.io/projected/2a8df43a-9939-47e0-8a11-b4feacd8ec95-kube-api-access-jf7p7\") pod \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.919221 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8df43a-9939-47e0-8a11-b4feacd8ec95-logs\") pod \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\" (UID: \"2a8df43a-9939-47e0-8a11-b4feacd8ec95\") " Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.920100 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a8df43a-9939-47e0-8a11-b4feacd8ec95-logs" (OuterVolumeSpecName: "logs") pod "2a8df43a-9939-47e0-8a11-b4feacd8ec95" (UID: "2a8df43a-9939-47e0-8a11-b4feacd8ec95"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.922476 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a8df43a-9939-47e0-8a11-b4feacd8ec95-kube-api-access-jf7p7" (OuterVolumeSpecName: "kube-api-access-jf7p7") pod "2a8df43a-9939-47e0-8a11-b4feacd8ec95" (UID: "2a8df43a-9939-47e0-8a11-b4feacd8ec95"). InnerVolumeSpecName "kube-api-access-jf7p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.949557 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a8df43a-9939-47e0-8a11-b4feacd8ec95" (UID: "2a8df43a-9939-47e0-8a11-b4feacd8ec95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.963386 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-config-data" (OuterVolumeSpecName: "config-data") pod "2a8df43a-9939-47e0-8a11-b4feacd8ec95" (UID: "2a8df43a-9939-47e0-8a11-b4feacd8ec95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.975266 5049 generic.go:334] "Generic (PLEG): container finished" podID="2a8df43a-9939-47e0-8a11-b4feacd8ec95" containerID="f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b" exitCode=0 Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.975301 5049 generic.go:334] "Generic (PLEG): container finished" podID="2a8df43a-9939-47e0-8a11-b4feacd8ec95" containerID="4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe" exitCode=143 Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.975311 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a8df43a-9939-47e0-8a11-b4feacd8ec95","Type":"ContainerDied","Data":"f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b"} Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.975353 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a8df43a-9939-47e0-8a11-b4feacd8ec95","Type":"ContainerDied","Data":"4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe"} Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.975365 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a8df43a-9939-47e0-8a11-b4feacd8ec95","Type":"ContainerDied","Data":"96312df78639d7b2572c4881a58dbfd65bd300c93cde0b28846f32058053871f"} Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.975348 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.975376 5049 scope.go:117] "RemoveContainer" containerID="f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.978881 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-d89x5" event={"ID":"4e27cd2f-9407-4444-9914-9892b1e41d13","Type":"ContainerDied","Data":"e258f2a80c9c7cbc4063e55334355d6b1ec2d82dfd23bf3d2aa4efcec911f3d6"} Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.978869 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-d89x5" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.978913 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e258f2a80c9c7cbc4063e55334355d6b1ec2d82dfd23bf3d2aa4efcec911f3d6" Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.981125 5049 generic.go:334] "Generic (PLEG): container finished" podID="b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" containerID="f034a42c80a71ded5c45cface06423a50b753f857ad67603d6e62bdf0890ae9c" exitCode=143 Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.981253 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a" containerName="nova-scheduler-scheduler" containerID="cri-o://64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94" gracePeriod=30 Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.981312 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486","Type":"ContainerDied","Data":"f034a42c80a71ded5c45cface06423a50b753f857ad67603d6e62bdf0890ae9c"} Jan 27 17:19:42 crc kubenswrapper[5049]: I0127 17:19:42.988432 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2a8df43a-9939-47e0-8a11-b4feacd8ec95" (UID: "2a8df43a-9939-47e0-8a11-b4feacd8ec95"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.007006 5049 scope.go:117] "RemoveContainer" containerID="4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.022177 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf7p7\" (UniqueName: \"kubernetes.io/projected/2a8df43a-9939-47e0-8a11-b4feacd8ec95-kube-api-access-jf7p7\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.022222 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a8df43a-9939-47e0-8a11-b4feacd8ec95-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.022239 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.022251 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.022263 5049 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a8df43a-9939-47e0-8a11-b4feacd8ec95-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.034193 5049 scope.go:117] "RemoveContainer" containerID="f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b" Jan 27 17:19:43 crc kubenswrapper[5049]: E0127 17:19:43.035478 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b\": container with ID starting with f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b not found: ID does not exist" containerID="f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.035520 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b"} err="failed to get container status \"f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b\": rpc error: code = NotFound desc = could not find container \"f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b\": container with ID starting with f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b not found: ID does not exist" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.035548 5049 scope.go:117] "RemoveContainer" containerID="4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe" Jan 27 17:19:43 crc kubenswrapper[5049]: E0127 17:19:43.035973 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe\": container with ID starting with 4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe not found: ID does not exist" containerID="4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.036032 5049 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe"} err="failed to get container status \"4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe\": rpc error: code = NotFound desc = could not find container \"4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe\": container with ID starting with 4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe not found: ID does not exist" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.036081 5049 scope.go:117] "RemoveContainer" containerID="f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.036539 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b"} err="failed to get container status \"f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b\": rpc error: code = NotFound desc = could not find container \"f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b\": container with ID starting with f5bbf8cdf28ac75081e9ad58177c72a64452e89de8986288c85f1a7e5885000b not found: ID does not exist" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.036645 5049 scope.go:117] "RemoveContainer" containerID="4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.036980 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe"} err="failed to get container status \"4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe\": rpc error: code = NotFound desc = could not find container \"4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe\": container with ID starting with 4c455f07f139892f945ed7c608c7efcdb2b4f5e683172c990902df500e3af4fe not found: ID does not exist" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.066751 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 17:19:43 crc kubenswrapper[5049]: E0127 17:19:43.067397 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" containerName="init" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.067480 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" containerName="init" Jan 27 17:19:43 crc kubenswrapper[5049]: E0127 17:19:43.067561 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e27cd2f-9407-4444-9914-9892b1e41d13" containerName="nova-cell1-conductor-db-sync" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.067615 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e27cd2f-9407-4444-9914-9892b1e41d13" containerName="nova-cell1-conductor-db-sync" Jan 27 17:19:43 crc kubenswrapper[5049]: E0127 17:19:43.067691 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22525db-6f4e-458d-83c7-c27f295e8363" containerName="nova-manage" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.067747 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22525db-6f4e-458d-83c7-c27f295e8363" containerName="nova-manage" Jan 27 17:19:43 crc kubenswrapper[5049]: E0127 17:19:43.067835 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8df43a-9939-47e0-8a11-b4feacd8ec95" 
containerName="nova-metadata-metadata" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.067890 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a8df43a-9939-47e0-8a11-b4feacd8ec95" containerName="nova-metadata-metadata" Jan 27 17:19:43 crc kubenswrapper[5049]: E0127 17:19:43.067952 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" containerName="dnsmasq-dns" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.068003 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" containerName="dnsmasq-dns" Jan 27 17:19:43 crc kubenswrapper[5049]: E0127 17:19:43.068057 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8df43a-9939-47e0-8a11-b4feacd8ec95" containerName="nova-metadata-log" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.068107 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a8df43a-9939-47e0-8a11-b4feacd8ec95" containerName="nova-metadata-log" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.068335 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e27cd2f-9407-4444-9914-9892b1e41d13" containerName="nova-cell1-conductor-db-sync" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.068409 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a8df43a-9939-47e0-8a11-b4feacd8ec95" containerName="nova-metadata-log" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.068475 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="d22525db-6f4e-458d-83c7-c27f295e8363" containerName="nova-manage" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.068564 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b2bdc1-ee4b-41d8-bc60-d25fbd4fa325" containerName="dnsmasq-dns" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.068629 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a8df43a-9939-47e0-8a11-b4feacd8ec95" containerName="nova-metadata-metadata" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.069287 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.074193 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.078496 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.123727 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22dc9694-6c5e-4ac3-99e3-910dac92573a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"22dc9694-6c5e-4ac3-99e3-910dac92573a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.123796 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mwnq\" (UniqueName: \"kubernetes.io/projected/22dc9694-6c5e-4ac3-99e3-910dac92573a-kube-api-access-7mwnq\") pod \"nova-cell1-conductor-0\" (UID: \"22dc9694-6c5e-4ac3-99e3-910dac92573a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.124021 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22dc9694-6c5e-4ac3-99e3-910dac92573a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"22dc9694-6c5e-4ac3-99e3-910dac92573a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.225940 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22dc9694-6c5e-4ac3-99e3-910dac92573a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"22dc9694-6c5e-4ac3-99e3-910dac92573a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.226267 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22dc9694-6c5e-4ac3-99e3-910dac92573a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"22dc9694-6c5e-4ac3-99e3-910dac92573a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.226292 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mwnq\" (UniqueName: \"kubernetes.io/projected/22dc9694-6c5e-4ac3-99e3-910dac92573a-kube-api-access-7mwnq\") pod \"nova-cell1-conductor-0\" (UID: \"22dc9694-6c5e-4ac3-99e3-910dac92573a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.232481 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22dc9694-6c5e-4ac3-99e3-910dac92573a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"22dc9694-6c5e-4ac3-99e3-910dac92573a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.232762 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22dc9694-6c5e-4ac3-99e3-910dac92573a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"22dc9694-6c5e-4ac3-99e3-910dac92573a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.250656 5049 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mwnq\" (UniqueName: \"kubernetes.io/projected/22dc9694-6c5e-4ac3-99e3-910dac92573a-kube-api-access-7mwnq\") pod \"nova-cell1-conductor-0\" (UID: \"22dc9694-6c5e-4ac3-99e3-910dac92573a\") " pod="openstack/nova-cell1-conductor-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.323696 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.331500 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.363125 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.364653 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.366474 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.368239 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.373919 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.402213 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.430394 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-config-data\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.430449 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192a8418-7767-47e2-9171-a79a3a0c52e8-logs\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.430481 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74wdg\" (UniqueName: \"kubernetes.io/projected/192a8418-7767-47e2-9171-a79a3a0c52e8-kube-api-access-74wdg\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.430517 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.430538 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc 
kubenswrapper[5049]: I0127 17:19:43.532529 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-config-data\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.532607 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192a8418-7767-47e2-9171-a79a3a0c52e8-logs\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.532795 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74wdg\" (UniqueName: \"kubernetes.io/projected/192a8418-7767-47e2-9171-a79a3a0c52e8-kube-api-access-74wdg\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.532850 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.533232 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192a8418-7767-47e2-9171-a79a3a0c52e8-logs\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.533327 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.539077 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.540813 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.541505 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-config-data\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.557508 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74wdg\" (UniqueName: \"kubernetes.io/projected/192a8418-7767-47e2-9171-a79a3a0c52e8-kube-api-access-74wdg\") pod \"nova-metadata-0\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " 
pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.676900 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a8df43a-9939-47e0-8a11-b4feacd8ec95" path="/var/lib/kubelet/pods/2a8df43a-9939-47e0-8a11-b4feacd8ec95/volumes" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.683559 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:19:43 crc kubenswrapper[5049]: I0127 17:19:43.859025 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 17:19:44 crc kubenswrapper[5049]: I0127 17:19:44.011768 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"22dc9694-6c5e-4ac3-99e3-910dac92573a","Type":"ContainerStarted","Data":"5c22ad211a63886705bf0766f5de3774ec9c809eb6079254f8059f8f06f05170"} Jan 27 17:19:44 crc kubenswrapper[5049]: W0127 17:19:44.171874 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod192a8418_7767_47e2_9171_a79a3a0c52e8.slice/crio-7fa251fe134fa8de7ad16d127f6de5305cd69e867cadbb2c545cb4e77b2ddc5c WatchSource:0}: Error finding container 7fa251fe134fa8de7ad16d127f6de5305cd69e867cadbb2c545cb4e77b2ddc5c: Status 404 returned error can't find the container with id 7fa251fe134fa8de7ad16d127f6de5305cd69e867cadbb2c545cb4e77b2ddc5c Jan 27 17:19:44 crc kubenswrapper[5049]: I0127 17:19:44.173856 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:19:44 crc kubenswrapper[5049]: E0127 17:19:44.316514 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 17:19:44 crc kubenswrapper[5049]: E0127 17:19:44.317616 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 17:19:44 crc kubenswrapper[5049]: E0127 17:19:44.319499 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 17:19:44 crc kubenswrapper[5049]: E0127 17:19:44.319794 5049 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a" containerName="nova-scheduler-scheduler" Jan 27 17:19:45 crc kubenswrapper[5049]: I0127 17:19:45.045183 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"22dc9694-6c5e-4ac3-99e3-910dac92573a","Type":"ContainerStarted","Data":"abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38"} Jan 27 17:19:45 crc kubenswrapper[5049]: I0127 17:19:45.046812 5049 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 17:19:45 crc kubenswrapper[5049]: I0127 17:19:45.051316 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"192a8418-7767-47e2-9171-a79a3a0c52e8","Type":"ContainerStarted","Data":"37e7dff6e688a75d35d9f2116928ce958e18fdb1136880574686d34787e66aac"} Jan 27 17:19:45 crc kubenswrapper[5049]: I0127 17:19:45.051379 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"192a8418-7767-47e2-9171-a79a3a0c52e8","Type":"ContainerStarted","Data":"c1c4ab174dc6e57cad518e960e94c8342e1268c2c7e9c98566eb336aae1cfd78"} Jan 27 17:19:45 crc kubenswrapper[5049]: I0127 17:19:45.051412 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"192a8418-7767-47e2-9171-a79a3a0c52e8","Type":"ContainerStarted","Data":"7fa251fe134fa8de7ad16d127f6de5305cd69e867cadbb2c545cb4e77b2ddc5c"} Jan 27 17:19:45 crc kubenswrapper[5049]: I0127 17:19:45.070539 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.070512425 podStartE2EDuration="2.070512425s" podCreationTimestamp="2026-01-27 17:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:19:45.069879787 +0000 UTC m=+1360.168853336" watchObservedRunningTime="2026-01-27 17:19:45.070512425 +0000 UTC m=+1360.169486014" Jan 27 17:19:45 crc kubenswrapper[5049]: I0127 17:19:45.874484 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 17:19:45 crc kubenswrapper[5049]: I0127 17:19:45.900944 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.900924899 podStartE2EDuration="2.900924899s" podCreationTimestamp="2026-01-27 17:19:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:19:45.102932182 +0000 UTC m=+1360.201905771" watchObservedRunningTime="2026-01-27 17:19:45.900924899 +0000 UTC m=+1360.999898448" Jan 27 17:19:45 crc kubenswrapper[5049]: I0127 17:19:45.992056 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-config-data\") pod \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " Jan 27 17:19:45 crc kubenswrapper[5049]: I0127 17:19:45.992171 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p87cv\" (UniqueName: \"kubernetes.io/projected/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-kube-api-access-p87cv\") pod \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " Jan 27 17:19:45 crc kubenswrapper[5049]: I0127 17:19:45.992353 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-logs\") pod \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " Jan 27 17:19:45 crc kubenswrapper[5049]: I0127 17:19:45.992419 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-combined-ca-bundle\") pod \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\" (UID: \"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486\") " Jan 27 17:19:45 crc kubenswrapper[5049]: I0127 17:19:45.993061 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-logs" (OuterVolumeSpecName: "logs") pod "b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" (UID: "b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:19:45 crc kubenswrapper[5049]: I0127 17:19:45.997350 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-kube-api-access-p87cv" (OuterVolumeSpecName: "kube-api-access-p87cv") pod "b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" (UID: "b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486"). InnerVolumeSpecName "kube-api-access-p87cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.025331 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" (UID: "b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.032798 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-config-data" (OuterVolumeSpecName: "config-data") pod "b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" (UID: "b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.061488 5049 generic.go:334] "Generic (PLEG): container finished" podID="b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" containerID="642287ee3807371c7c94cce320d50ccac072020f5af56b7ffc1275383d8d87bd" exitCode=0 Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.062520 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.067949 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486","Type":"ContainerDied","Data":"642287ee3807371c7c94cce320d50ccac072020f5af56b7ffc1275383d8d87bd"} Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.068001 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486","Type":"ContainerDied","Data":"96bde3089afdf6f78db7413a30d5baca2af24db54e37202934d026e1752f7297"} Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.068024 5049 scope.go:117] "RemoveContainer" containerID="642287ee3807371c7c94cce320d50ccac072020f5af56b7ffc1275383d8d87bd" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.094626 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.094688 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p87cv\" (UniqueName: \"kubernetes.io/projected/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-kube-api-access-p87cv\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.094704 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.094716 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.123534 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.127104 5049 scope.go:117] "RemoveContainer" containerID="f034a42c80a71ded5c45cface06423a50b753f857ad67603d6e62bdf0890ae9c" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.135834 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.148617 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 17:19:46 crc kubenswrapper[5049]: E0127 17:19:46.149290 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" containerName="nova-api-log" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.149319 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" containerName="nova-api-log" Jan 27 17:19:46 crc kubenswrapper[5049]: E0127 17:19:46.149362 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" containerName="nova-api-api" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.149376 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" containerName="nova-api-api" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.149690 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" containerName="nova-api-api" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.149719 5049 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" containerName="nova-api-log" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.151377 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.158492 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.160293 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.164530 5049 scope.go:117] "RemoveContainer" containerID="642287ee3807371c7c94cce320d50ccac072020f5af56b7ffc1275383d8d87bd" Jan 27 17:19:46 crc kubenswrapper[5049]: E0127 17:19:46.167175 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"642287ee3807371c7c94cce320d50ccac072020f5af56b7ffc1275383d8d87bd\": container with ID starting with 642287ee3807371c7c94cce320d50ccac072020f5af56b7ffc1275383d8d87bd not found: ID does not exist" containerID="642287ee3807371c7c94cce320d50ccac072020f5af56b7ffc1275383d8d87bd" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.167259 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"642287ee3807371c7c94cce320d50ccac072020f5af56b7ffc1275383d8d87bd"} err="failed to get container status \"642287ee3807371c7c94cce320d50ccac072020f5af56b7ffc1275383d8d87bd\": rpc error: code = NotFound desc = could not find container \"642287ee3807371c7c94cce320d50ccac072020f5af56b7ffc1275383d8d87bd\": container with ID starting with 642287ee3807371c7c94cce320d50ccac072020f5af56b7ffc1275383d8d87bd not found: ID does not exist" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.167287 5049 scope.go:117] "RemoveContainer" containerID="f034a42c80a71ded5c45cface06423a50b753f857ad67603d6e62bdf0890ae9c" Jan 27 17:19:46 crc kubenswrapper[5049]: E0127 17:19:46.170220 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f034a42c80a71ded5c45cface06423a50b753f857ad67603d6e62bdf0890ae9c\": container with ID starting with f034a42c80a71ded5c45cface06423a50b753f857ad67603d6e62bdf0890ae9c not found: ID does not exist" containerID="f034a42c80a71ded5c45cface06423a50b753f857ad67603d6e62bdf0890ae9c" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.170258 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f034a42c80a71ded5c45cface06423a50b753f857ad67603d6e62bdf0890ae9c"} err="failed to get container status \"f034a42c80a71ded5c45cface06423a50b753f857ad67603d6e62bdf0890ae9c\": rpc error: code = NotFound desc = could not find container \"f034a42c80a71ded5c45cface06423a50b753f857ad67603d6e62bdf0890ae9c\": container with ID starting with f034a42c80a71ded5c45cface06423a50b753f857ad67603d6e62bdf0890ae9c not found: ID does not exist" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.300408 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe02789-7401-49ef-9fa4-f79382894ccc-config-data\") pod \"nova-api-0\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.300489 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abe02789-7401-49ef-9fa4-f79382894ccc-logs\") pod \"nova-api-0\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.300527 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe02789-7401-49ef-9fa4-f79382894ccc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.300608 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92dt9\" (UniqueName: \"kubernetes.io/projected/abe02789-7401-49ef-9fa4-f79382894ccc-kube-api-access-92dt9\") pod \"nova-api-0\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.401943 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe02789-7401-49ef-9fa4-f79382894ccc-config-data\") pod \"nova-api-0\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.402023 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abe02789-7401-49ef-9fa4-f79382894ccc-logs\") pod \"nova-api-0\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.402054 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe02789-7401-49ef-9fa4-f79382894ccc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.402143 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92dt9\" (UniqueName: \"kubernetes.io/projected/abe02789-7401-49ef-9fa4-f79382894ccc-kube-api-access-92dt9\") pod \"nova-api-0\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.402782 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abe02789-7401-49ef-9fa4-f79382894ccc-logs\") pod \"nova-api-0\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.406841 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe02789-7401-49ef-9fa4-f79382894ccc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.407049 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe02789-7401-49ef-9fa4-f79382894ccc-config-data\") pod \"nova-api-0\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.420452 5049 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-92dt9\" (UniqueName: \"kubernetes.io/projected/abe02789-7401-49ef-9fa4-f79382894ccc-kube-api-access-92dt9\") pod \"nova-api-0\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.475351 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 17:19:46 crc kubenswrapper[5049]: I0127 17:19:46.919994 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:19:46 crc kubenswrapper[5049]: W0127 17:19:46.924637 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabe02789_7401_49ef_9fa4_f79382894ccc.slice/crio-2105cbc96338b12919210628406308a3d32bbddf7f21d6ed3011b3f11939387a WatchSource:0}: Error finding container 2105cbc96338b12919210628406308a3d32bbddf7f21d6ed3011b3f11939387a: Status 404 returned error can't find the container with id 2105cbc96338b12919210628406308a3d32bbddf7f21d6ed3011b3f11939387a Jan 27 17:19:47 crc kubenswrapper[5049]: I0127 17:19:47.069829 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abe02789-7401-49ef-9fa4-f79382894ccc","Type":"ContainerStarted","Data":"2105cbc96338b12919210628406308a3d32bbddf7f21d6ed3011b3f11939387a"} Jan 27 17:19:47 crc kubenswrapper[5049]: I0127 17:19:47.631964 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 17:19:47 crc kubenswrapper[5049]: I0127 17:19:47.659319 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486" path="/var/lib/kubelet/pods/b7ebfc8a-c809-41a6-9c5f-cd7cc17c6486/volumes" Jan 27 17:19:47 crc kubenswrapper[5049]: I0127 17:19:47.750438 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mvt4\" (UniqueName: \"kubernetes.io/projected/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-kube-api-access-7mvt4\") pod \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\" (UID: \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\") " Jan 27 17:19:47 crc kubenswrapper[5049]: I0127 17:19:47.750842 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-config-data\") pod \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\" (UID: \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\") " Jan 27 17:19:47 crc kubenswrapper[5049]: I0127 17:19:47.750872 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-combined-ca-bundle\") pod \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\" (UID: \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\") " Jan 27 17:19:47 crc kubenswrapper[5049]: I0127 17:19:47.757928 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-kube-api-access-7mvt4" (OuterVolumeSpecName: "kube-api-access-7mvt4") pod "7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a" (UID: "7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a"). InnerVolumeSpecName "kube-api-access-7mvt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:19:47 crc kubenswrapper[5049]: E0127 17:19:47.779048 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-config-data podName:7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a nodeName:}" failed. No retries permitted until 2026-01-27 17:19:48.279023455 +0000 UTC m=+1363.377997004 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-config-data") pod "7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a" (UID: "7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a") : error deleting /var/lib/kubelet/pods/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a/volume-subpaths: remove /var/lib/kubelet/pods/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a/volume-subpaths: no such file or directory Jan 27 17:19:47 crc kubenswrapper[5049]: I0127 17:19:47.781807 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a" (UID: "7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:47 crc kubenswrapper[5049]: I0127 17:19:47.853119 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mvt4\" (UniqueName: \"kubernetes.io/projected/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-kube-api-access-7mvt4\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:47 crc kubenswrapper[5049]: I0127 17:19:47.853167 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.081176 5049 generic.go:334] "Generic (PLEG): container finished" podID="7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a" containerID="64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94" exitCode=0 Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.081232 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a","Type":"ContainerDied","Data":"64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94"} Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.081260 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a","Type":"ContainerDied","Data":"46a82ccf749386b64944e5ffa3ee5dd06c4536d0378f16645790ddc28f289f6b"} Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.081275 5049 scope.go:117] "RemoveContainer" containerID="64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.081369 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.085430 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abe02789-7401-49ef-9fa4-f79382894ccc","Type":"ContainerStarted","Data":"7a60f2b4c7236f8fc9cfa5573998b7a0ab2339659b74ad2a6722d0a266d9fb5b"} Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.085465 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abe02789-7401-49ef-9fa4-f79382894ccc","Type":"ContainerStarted","Data":"003e74c50c55ae2e04c081488aecd39f7c57e2a50af4b970a7d9f1fefb8ab522"} Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.108903 5049 scope.go:117] "RemoveContainer" containerID="64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94" Jan 27 17:19:48 crc kubenswrapper[5049]: E0127 17:19:48.109371 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94\": container with ID starting with 64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94 not found: ID does not exist" containerID="64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.109424 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94"} err="failed to get container status \"64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94\": rpc error: code = NotFound desc = could not find container \"64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94\": container with ID starting with 64c97de61bd4a4d142bda0bbbea82a7693a65570bc78a54a7dbb0ce8d9f23f94 not found: ID does not exist" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.109717 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.109656012 podStartE2EDuration="2.109656012s" podCreationTimestamp="2026-01-27 17:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:19:48.107408458 +0000 UTC m=+1363.206382007" watchObservedRunningTime="2026-01-27 17:19:48.109656012 +0000 UTC m=+1363.208629561" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.362370 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-config-data\") pod \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\" (UID: \"7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a\") " Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.375650 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-config-data" (OuterVolumeSpecName: "config-data") pod "7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a" (UID: "7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.461451 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.465201 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.467533 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.493162 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:19:48 crc kubenswrapper[5049]: E0127 17:19:48.493613 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a" containerName="nova-scheduler-scheduler" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.493628 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a" containerName="nova-scheduler-scheduler" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.494791 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a" containerName="nova-scheduler-scheduler" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.496055 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.498877 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.502577 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.668736 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.668831 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtsz2\" (UniqueName: \"kubernetes.io/projected/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-kube-api-access-vtsz2\") pod \"nova-scheduler-0\" (UID: \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.668926 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-config-data\") pod \"nova-scheduler-0\" (UID: \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.683772 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.683829 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.770195 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.770561 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtsz2\" (UniqueName: \"kubernetes.io/projected/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-kube-api-access-vtsz2\") pod \"nova-scheduler-0\" (UID: \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.770659 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-config-data\") pod \"nova-scheduler-0\" (UID: \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.775164 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.778534 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-config-data\") pod \"nova-scheduler-0\" (UID: \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.788556 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtsz2\" (UniqueName: \"kubernetes.io/projected/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-kube-api-access-vtsz2\") pod \"nova-scheduler-0\" (UID: \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\") " pod="openstack/nova-scheduler-0" Jan 27 17:19:48 crc kubenswrapper[5049]: I0127 17:19:48.862925 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 17:19:49 crc kubenswrapper[5049]: I0127 17:19:49.329790 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:19:49 crc kubenswrapper[5049]: W0127 17:19:49.332954 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07e07973_dfb3_40a3_b2b7_22a0e4afb32a.slice/crio-8df0545a21386c0ce6b46fc774eb6b343956f8d97c83085adc5b832db4dfbd45 WatchSource:0}: Error finding container 8df0545a21386c0ce6b46fc774eb6b343956f8d97c83085adc5b832db4dfbd45: Status 404 returned error can't find the container with id 8df0545a21386c0ce6b46fc774eb6b343956f8d97c83085adc5b832db4dfbd45 Jan 27 17:19:49 crc kubenswrapper[5049]: I0127 17:19:49.671759 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a" path="/var/lib/kubelet/pods/7eb095d2-f1a9-4731-9c9a-8a8f50e1e25a/volumes" Jan 27 17:19:50 crc kubenswrapper[5049]: I0127 17:19:50.110405 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"07e07973-dfb3-40a3-b2b7-22a0e4afb32a","Type":"ContainerStarted","Data":"0cf29a2180a86eedfb4fb0028ef630d971b0aeb64233f1b8bb2459f9f116e820"} Jan 27 17:19:50 crc kubenswrapper[5049]: I0127 17:19:50.110450 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"07e07973-dfb3-40a3-b2b7-22a0e4afb32a","Type":"ContainerStarted","Data":"8df0545a21386c0ce6b46fc774eb6b343956f8d97c83085adc5b832db4dfbd45"} Jan 27 17:19:50 crc kubenswrapper[5049]: I0127 17:19:50.139975 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.139954561 podStartE2EDuration="2.139954561s" podCreationTimestamp="2026-01-27 17:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:19:50.135657408 +0000 UTC m=+1365.234630957" watchObservedRunningTime="2026-01-27 17:19:50.139954561 +0000 UTC m=+1365.238928120" Jan 27 17:19:53 crc kubenswrapper[5049]: I0127 17:19:53.439718 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 17:19:53 crc kubenswrapper[5049]: I0127 17:19:53.683786 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 17:19:53 crc kubenswrapper[5049]: I0127 17:19:53.683825 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 17:19:53 crc kubenswrapper[5049]: I0127 17:19:53.864106 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 17:19:54 crc kubenswrapper[5049]: I0127 17:19:54.699920 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="192a8418-7767-47e2-9171-a79a3a0c52e8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:19:54 crc kubenswrapper[5049]: I0127 17:19:54.699971 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="192a8418-7767-47e2-9171-a79a3a0c52e8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Jan 27 17:19:54 crc kubenswrapper[5049]: I0127 17:19:54.954283 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 17:19:56 crc kubenswrapper[5049]: I0127 17:19:56.476020 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 17:19:56 crc kubenswrapper[5049]: I0127 17:19:56.476645 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 17:19:57 crc kubenswrapper[5049]: I0127 17:19:57.476514 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="abe02789-7401-49ef-9fa4-f79382894ccc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:19:57 crc kubenswrapper[5049]: I0127 17:19:57.516913 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="abe02789-7401-49ef-9fa4-f79382894ccc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:19:58 crc kubenswrapper[5049]: I0127 17:19:58.691229 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 17:19:58 crc kubenswrapper[5049]: I0127 17:19:58.691474 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b4962a09-aea5-455e-8620-b83f9ede60e5" containerName="kube-state-metrics" containerID="cri-o://83f6029dc42a366c9752e1aa6f03886c6ce220b7fe1e10f9085bca7560faa674" gracePeriod=30 Jan 27 17:19:58 crc kubenswrapper[5049]: I0127 17:19:58.863118 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 17:19:58 crc kubenswrapper[5049]: I0127 17:19:58.904873 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 17:19:59 crc kubenswrapper[5049]: I0127 17:19:59.196825 5049 generic.go:334] "Generic (PLEG): container finished" podID="b4962a09-aea5-455e-8620-b83f9ede60e5" containerID="83f6029dc42a366c9752e1aa6f03886c6ce220b7fe1e10f9085bca7560faa674" exitCode=2 Jan 27 17:19:59 crc kubenswrapper[5049]: I0127 17:19:59.196904 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4962a09-aea5-455e-8620-b83f9ede60e5","Type":"ContainerDied","Data":"83f6029dc42a366c9752e1aa6f03886c6ce220b7fe1e10f9085bca7560faa674"} Jan 27 17:19:59 crc kubenswrapper[5049]: I0127 17:19:59.197211 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4962a09-aea5-455e-8620-b83f9ede60e5","Type":"ContainerDied","Data":"a4661c69b9373cd4557cfdd633e7ab04376e716ecc51ef8d2dbbf8813e7e5a55"} Jan 27 17:19:59 crc kubenswrapper[5049]: I0127 17:19:59.197255 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4661c69b9373cd4557cfdd633e7ab04376e716ecc51ef8d2dbbf8813e7e5a55" Jan 27 17:19:59 crc kubenswrapper[5049]: I0127 17:19:59.199871 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 17:19:59 crc kubenswrapper[5049]: I0127 17:19:59.235347 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 17:19:59 crc kubenswrapper[5049]: I0127 17:19:59.270554 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz49k\" (UniqueName: \"kubernetes.io/projected/b4962a09-aea5-455e-8620-b83f9ede60e5-kube-api-access-dz49k\") pod \"b4962a09-aea5-455e-8620-b83f9ede60e5\" (UID: \"b4962a09-aea5-455e-8620-b83f9ede60e5\") " Jan 27 17:19:59 crc kubenswrapper[5049]: I0127 17:19:59.281561 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4962a09-aea5-455e-8620-b83f9ede60e5-kube-api-access-dz49k" (OuterVolumeSpecName: "kube-api-access-dz49k") pod "b4962a09-aea5-455e-8620-b83f9ede60e5" (UID: "b4962a09-aea5-455e-8620-b83f9ede60e5"). InnerVolumeSpecName "kube-api-access-dz49k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:19:59 crc kubenswrapper[5049]: I0127 17:19:59.372848 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz49k\" (UniqueName: \"kubernetes.io/projected/b4962a09-aea5-455e-8620-b83f9ede60e5-kube-api-access-dz49k\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.204739 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.230274 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.240058 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.254637 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 17:20:00 crc kubenswrapper[5049]: E0127 17:20:00.255077 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4962a09-aea5-455e-8620-b83f9ede60e5" containerName="kube-state-metrics" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.255097 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4962a09-aea5-455e-8620-b83f9ede60e5" containerName="kube-state-metrics" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.255266 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4962a09-aea5-455e-8620-b83f9ede60e5" containerName="kube-state-metrics" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.255861 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.258729 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.258839 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.264026 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.389874 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78t4t\" (UniqueName: \"kubernetes.io/projected/b915091f-1f89-4602-8b1f-2214883644e0-kube-api-access-78t4t\") pod \"kube-state-metrics-0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.389933 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.389974 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.390064 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.492206 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.492377 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.492465 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78t4t\" (UniqueName: \"kubernetes.io/projected/b915091f-1f89-4602-8b1f-2214883644e0-kube-api-access-78t4t\") pod \"kube-state-metrics-0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.492509 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.496500 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.500178 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.502398 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.509008 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78t4t\" (UniqueName: \"kubernetes.io/projected/b915091f-1f89-4602-8b1f-2214883644e0-kube-api-access-78t4t\") pod \"kube-state-metrics-0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.575935 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.638512 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.638785 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="ceilometer-central-agent" containerID="cri-o://0b2badcfbd548a1817f16454da7f854e9a38fd084020e64028b0e3e831339c72" gracePeriod=30 Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.638820 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="proxy-httpd" containerID="cri-o://59d3238cc70989f942daff70c775ab13b0bdcd4e891173403c92b0b7ded74ba0" gracePeriod=30 Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.638907 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="sg-core" containerID="cri-o://3dc228f00c4608ac53cf0a59ec988cc912c9f4d0c096eecf91ff7a4738304de8" gracePeriod=30 Jan 27 17:20:00 crc kubenswrapper[5049]: I0127 17:20:00.638921 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="ceilometer-notification-agent" containerID="cri-o://ef20f3481756251cb513c5a013353e2bb19fbc6f3e627e561ed4e15231346830" gracePeriod=30 Jan 27 17:20:01 crc kubenswrapper[5049]: I0127 17:20:01.083941 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 17:20:01 crc kubenswrapper[5049]: W0127 17:20:01.093682 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb915091f_1f89_4602_8b1f_2214883644e0.slice/crio-a8b03b5e811b51adb351ce218a471482ed94ea5e53b0a5591d871f7870a3bcc1 WatchSource:0}: Error finding container a8b03b5e811b51adb351ce218a471482ed94ea5e53b0a5591d871f7870a3bcc1: Status 404 returned error can't find the container with id a8b03b5e811b51adb351ce218a471482ed94ea5e53b0a5591d871f7870a3bcc1 Jan 27 17:20:01 crc kubenswrapper[5049]: I0127 17:20:01.215725 5049 generic.go:334] "Generic (PLEG): container finished" podID="106b6683-cac6-4291-b3fd-73259bc511c3" containerID="59d3238cc70989f942daff70c775ab13b0bdcd4e891173403c92b0b7ded74ba0" exitCode=0 Jan 27 17:20:01 crc kubenswrapper[5049]: I0127 17:20:01.215761 5049 generic.go:334] "Generic (PLEG): container finished" podID="106b6683-cac6-4291-b3fd-73259bc511c3" containerID="3dc228f00c4608ac53cf0a59ec988cc912c9f4d0c096eecf91ff7a4738304de8" exitCode=2 Jan 27 17:20:01 crc kubenswrapper[5049]: I0127 17:20:01.215774 5049 generic.go:334] "Generic (PLEG): container finished" podID="106b6683-cac6-4291-b3fd-73259bc511c3" containerID="0b2badcfbd548a1817f16454da7f854e9a38fd084020e64028b0e3e831339c72" exitCode=0 Jan 27 17:20:01 crc kubenswrapper[5049]: I0127 17:20:01.215780 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"106b6683-cac6-4291-b3fd-73259bc511c3","Type":"ContainerDied","Data":"59d3238cc70989f942daff70c775ab13b0bdcd4e891173403c92b0b7ded74ba0"} Jan 27 17:20:01 crc kubenswrapper[5049]: I0127 17:20:01.215811 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"106b6683-cac6-4291-b3fd-73259bc511c3","Type":"ContainerDied","Data":"3dc228f00c4608ac53cf0a59ec988cc912c9f4d0c096eecf91ff7a4738304de8"} Jan 27 17:20:01 crc kubenswrapper[5049]: I0127 17:20:01.215820 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"106b6683-cac6-4291-b3fd-73259bc511c3","Type":"ContainerDied","Data":"0b2badcfbd548a1817f16454da7f854e9a38fd084020e64028b0e3e831339c72"} Jan 27 17:20:01 crc kubenswrapper[5049]: I0127 17:20:01.218606 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b915091f-1f89-4602-8b1f-2214883644e0","Type":"ContainerStarted","Data":"a8b03b5e811b51adb351ce218a471482ed94ea5e53b0a5591d871f7870a3bcc1"} Jan 27 17:20:01 crc kubenswrapper[5049]: I0127 17:20:01.656234 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4962a09-aea5-455e-8620-b83f9ede60e5" path="/var/lib/kubelet/pods/b4962a09-aea5-455e-8620-b83f9ede60e5/volumes" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.005771 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.120662 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-config-data\") pod \"106b6683-cac6-4291-b3fd-73259bc511c3\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.120737 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/106b6683-cac6-4291-b3fd-73259bc511c3-log-httpd\") pod \"106b6683-cac6-4291-b3fd-73259bc511c3\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.120796 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-combined-ca-bundle\") pod \"106b6683-cac6-4291-b3fd-73259bc511c3\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.120889 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-sg-core-conf-yaml\") pod \"106b6683-cac6-4291-b3fd-73259bc511c3\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.120942 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/106b6683-cac6-4291-b3fd-73259bc511c3-run-httpd\") pod \"106b6683-cac6-4291-b3fd-73259bc511c3\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.120985 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-scripts\") pod \"106b6683-cac6-4291-b3fd-73259bc511c3\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.121034 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ngrx\" (UniqueName: \"kubernetes.io/projected/106b6683-cac6-4291-b3fd-73259bc511c3-kube-api-access-2ngrx\") pod 
\"106b6683-cac6-4291-b3fd-73259bc511c3\" (UID: \"106b6683-cac6-4291-b3fd-73259bc511c3\") " Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.121320 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/106b6683-cac6-4291-b3fd-73259bc511c3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "106b6683-cac6-4291-b3fd-73259bc511c3" (UID: "106b6683-cac6-4291-b3fd-73259bc511c3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.121890 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/106b6683-cac6-4291-b3fd-73259bc511c3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "106b6683-cac6-4291-b3fd-73259bc511c3" (UID: "106b6683-cac6-4291-b3fd-73259bc511c3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.126951 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/106b6683-cac6-4291-b3fd-73259bc511c3-kube-api-access-2ngrx" (OuterVolumeSpecName: "kube-api-access-2ngrx") pod "106b6683-cac6-4291-b3fd-73259bc511c3" (UID: "106b6683-cac6-4291-b3fd-73259bc511c3"). InnerVolumeSpecName "kube-api-access-2ngrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.127298 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-scripts" (OuterVolumeSpecName: "scripts") pod "106b6683-cac6-4291-b3fd-73259bc511c3" (UID: "106b6683-cac6-4291-b3fd-73259bc511c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.152261 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "106b6683-cac6-4291-b3fd-73259bc511c3" (UID: "106b6683-cac6-4291-b3fd-73259bc511c3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.222536 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ngrx\" (UniqueName: \"kubernetes.io/projected/106b6683-cac6-4291-b3fd-73259bc511c3-kube-api-access-2ngrx\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.222564 5049 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/106b6683-cac6-4291-b3fd-73259bc511c3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.222911 5049 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.222928 5049 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/106b6683-cac6-4291-b3fd-73259bc511c3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.222937 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.231927 5049 generic.go:334] "Generic (PLEG): container finished" podID="106b6683-cac6-4291-b3fd-73259bc511c3" containerID="ef20f3481756251cb513c5a013353e2bb19fbc6f3e627e561ed4e15231346830" exitCode=0 Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.231981 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.232013 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"106b6683-cac6-4291-b3fd-73259bc511c3","Type":"ContainerDied","Data":"ef20f3481756251cb513c5a013353e2bb19fbc6f3e627e561ed4e15231346830"} Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.232047 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"106b6683-cac6-4291-b3fd-73259bc511c3","Type":"ContainerDied","Data":"6fd5cf1fab1449b7439f87927af183378b75984a985cb5e06e6e7d8945f58a8e"} Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.232069 5049 scope.go:117] "RemoveContainer" containerID="59d3238cc70989f942daff70c775ab13b0bdcd4e891173403c92b0b7ded74ba0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.234435 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b915091f-1f89-4602-8b1f-2214883644e0","Type":"ContainerStarted","Data":"ff5438e2bba7d976fe7a35950c7d8f3e8815181c6b08e323c26c90c5eef3ef12"} Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.234559 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.236817 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "106b6683-cac6-4291-b3fd-73259bc511c3" (UID: "106b6683-cac6-4291-b3fd-73259bc511c3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.259917 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.787212367 podStartE2EDuration="2.259893554s" podCreationTimestamp="2026-01-27 17:20:00 +0000 UTC" firstStartedPulling="2026-01-27 17:20:01.098110214 +0000 UTC m=+1376.197083763" lastFinishedPulling="2026-01-27 17:20:01.570791401 +0000 UTC m=+1376.669764950" observedRunningTime="2026-01-27 17:20:02.254634773 +0000 UTC m=+1377.353608332" watchObservedRunningTime="2026-01-27 17:20:02.259893554 +0000 UTC m=+1377.358867103" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.267886 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-config-data" (OuterVolumeSpecName: "config-data") pod "106b6683-cac6-4291-b3fd-73259bc511c3" (UID: "106b6683-cac6-4291-b3fd-73259bc511c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.268431 5049 scope.go:117] "RemoveContainer" containerID="3dc228f00c4608ac53cf0a59ec988cc912c9f4d0c096eecf91ff7a4738304de8" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.287490 5049 scope.go:117] "RemoveContainer" containerID="ef20f3481756251cb513c5a013353e2bb19fbc6f3e627e561ed4e15231346830" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.310252 5049 scope.go:117] "RemoveContainer" containerID="0b2badcfbd548a1817f16454da7f854e9a38fd084020e64028b0e3e831339c72" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.324603 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.324992 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/106b6683-cac6-4291-b3fd-73259bc511c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.330960 5049 scope.go:117] "RemoveContainer" containerID="59d3238cc70989f942daff70c775ab13b0bdcd4e891173403c92b0b7ded74ba0" Jan 27 17:20:02 crc kubenswrapper[5049]: E0127 17:20:02.331551 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d3238cc70989f942daff70c775ab13b0bdcd4e891173403c92b0b7ded74ba0\": container with ID starting with 59d3238cc70989f942daff70c775ab13b0bdcd4e891173403c92b0b7ded74ba0 not found: ID does not exist" containerID="59d3238cc70989f942daff70c775ab13b0bdcd4e891173403c92b0b7ded74ba0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.331624 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d3238cc70989f942daff70c775ab13b0bdcd4e891173403c92b0b7ded74ba0"} err="failed to get container status \"59d3238cc70989f942daff70c775ab13b0bdcd4e891173403c92b0b7ded74ba0\": rpc error: code = NotFound desc = could not find container \"59d3238cc70989f942daff70c775ab13b0bdcd4e891173403c92b0b7ded74ba0\": container with ID starting with 59d3238cc70989f942daff70c775ab13b0bdcd4e891173403c92b0b7ded74ba0 not found: ID does not exist" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.331689 5049 scope.go:117] "RemoveContainer" 
containerID="3dc228f00c4608ac53cf0a59ec988cc912c9f4d0c096eecf91ff7a4738304de8" Jan 27 17:20:02 crc kubenswrapper[5049]: E0127 17:20:02.332212 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dc228f00c4608ac53cf0a59ec988cc912c9f4d0c096eecf91ff7a4738304de8\": container with ID starting with 3dc228f00c4608ac53cf0a59ec988cc912c9f4d0c096eecf91ff7a4738304de8 not found: ID does not exist" containerID="3dc228f00c4608ac53cf0a59ec988cc912c9f4d0c096eecf91ff7a4738304de8" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.332268 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dc228f00c4608ac53cf0a59ec988cc912c9f4d0c096eecf91ff7a4738304de8"} err="failed to get container status \"3dc228f00c4608ac53cf0a59ec988cc912c9f4d0c096eecf91ff7a4738304de8\": rpc error: code = NotFound desc = could not find container \"3dc228f00c4608ac53cf0a59ec988cc912c9f4d0c096eecf91ff7a4738304de8\": container with ID starting with 3dc228f00c4608ac53cf0a59ec988cc912c9f4d0c096eecf91ff7a4738304de8 not found: ID does not exist" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.332303 5049 scope.go:117] "RemoveContainer" containerID="ef20f3481756251cb513c5a013353e2bb19fbc6f3e627e561ed4e15231346830" Jan 27 17:20:02 crc kubenswrapper[5049]: E0127 17:20:02.332724 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef20f3481756251cb513c5a013353e2bb19fbc6f3e627e561ed4e15231346830\": container with ID starting with ef20f3481756251cb513c5a013353e2bb19fbc6f3e627e561ed4e15231346830 not found: ID does not exist" containerID="ef20f3481756251cb513c5a013353e2bb19fbc6f3e627e561ed4e15231346830" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.332758 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef20f3481756251cb513c5a013353e2bb19fbc6f3e627e561ed4e15231346830"} err="failed to get container status \"ef20f3481756251cb513c5a013353e2bb19fbc6f3e627e561ed4e15231346830\": rpc error: code = NotFound desc = could not find container \"ef20f3481756251cb513c5a013353e2bb19fbc6f3e627e561ed4e15231346830\": container with ID starting with ef20f3481756251cb513c5a013353e2bb19fbc6f3e627e561ed4e15231346830 not found: ID does not exist" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.332779 5049 scope.go:117] "RemoveContainer" containerID="0b2badcfbd548a1817f16454da7f854e9a38fd084020e64028b0e3e831339c72" Jan 27 17:20:02 crc kubenswrapper[5049]: E0127 17:20:02.333179 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b2badcfbd548a1817f16454da7f854e9a38fd084020e64028b0e3e831339c72\": container with ID starting with 0b2badcfbd548a1817f16454da7f854e9a38fd084020e64028b0e3e831339c72 not found: ID does not exist" containerID="0b2badcfbd548a1817f16454da7f854e9a38fd084020e64028b0e3e831339c72" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.333257 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2badcfbd548a1817f16454da7f854e9a38fd084020e64028b0e3e831339c72"} err="failed to get container status \"0b2badcfbd548a1817f16454da7f854e9a38fd084020e64028b0e3e831339c72\": rpc error: code = NotFound desc = could not find container \"0b2badcfbd548a1817f16454da7f854e9a38fd084020e64028b0e3e831339c72\": container with ID starting with 
0b2badcfbd548a1817f16454da7f854e9a38fd084020e64028b0e3e831339c72 not found: ID does not exist" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.600452 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.618712 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.634305 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:02 crc kubenswrapper[5049]: E0127 17:20:02.634828 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="ceilometer-central-agent" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.634850 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="ceilometer-central-agent" Jan 27 17:20:02 crc kubenswrapper[5049]: E0127 17:20:02.634884 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="sg-core" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.634894 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="sg-core" Jan 27 17:20:02 crc kubenswrapper[5049]: E0127 17:20:02.634903 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="ceilometer-notification-agent" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.634912 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="ceilometer-notification-agent" Jan 27 17:20:02 crc kubenswrapper[5049]: E0127 17:20:02.634924 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="proxy-httpd" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.634932 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="proxy-httpd" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.635177 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="ceilometer-notification-agent" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.635209 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="proxy-httpd" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.635231 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="sg-core" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.635242 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" containerName="ceilometer-central-agent" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.637427 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.640345 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.640755 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.640983 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.645945 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.732728 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.732831 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-scripts\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.732986 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phqdp\" (UniqueName: \"kubernetes.io/projected/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-kube-api-access-phqdp\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.733022 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-run-httpd\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.733104 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-log-httpd\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.733178 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.733214 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-config-data\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.733238 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.834748 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.834834 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-scripts\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.834964 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phqdp\" (UniqueName: \"kubernetes.io/projected/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-kube-api-access-phqdp\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.835014 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-run-httpd\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.835088 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-log-httpd\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.835173 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.835215 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-config-data\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.835255 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.835738 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-run-httpd\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.835888 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-log-httpd\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.839623 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.840994 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-scripts\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.841440 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-config-data\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.842121 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.845514 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.857725 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phqdp\" (UniqueName: \"kubernetes.io/projected/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-kube-api-access-phqdp\") pod \"ceilometer-0\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " pod="openstack/ceilometer-0" Jan 27 17:20:02 crc kubenswrapper[5049]: I0127 17:20:02.954136 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:20:03 crc kubenswrapper[5049]: W0127 17:20:03.234288 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfeb6a3e1_1aaa_4ec7_bc23_b27a18ae39e8.slice/crio-2f8abaefe5a6a841f4c6f85196a7b221d16d1dac5df268f18c7dfe846e72c957 WatchSource:0}: Error finding container 2f8abaefe5a6a841f4c6f85196a7b221d16d1dac5df268f18c7dfe846e72c957: Status 404 returned error can't find the container with id 2f8abaefe5a6a841f4c6f85196a7b221d16d1dac5df268f18c7dfe846e72c957 Jan 27 17:20:03 crc kubenswrapper[5049]: I0127 17:20:03.237746 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:03 crc kubenswrapper[5049]: I0127 17:20:03.261826 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8","Type":"ContainerStarted","Data":"2f8abaefe5a6a841f4c6f85196a7b221d16d1dac5df268f18c7dfe846e72c957"} Jan 27 17:20:03 crc kubenswrapper[5049]: I0127 17:20:03.657613 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="106b6683-cac6-4291-b3fd-73259bc511c3" path="/var/lib/kubelet/pods/106b6683-cac6-4291-b3fd-73259bc511c3/volumes" Jan 27 17:20:03 crc kubenswrapper[5049]: I0127 17:20:03.691068 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 17:20:03 crc kubenswrapper[5049]: I0127 17:20:03.691355 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 17:20:03 crc kubenswrapper[5049]: I0127 17:20:03.698120 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 17:20:03 crc kubenswrapper[5049]: I0127 17:20:03.702453 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 17:20:04 crc kubenswrapper[5049]: I0127 17:20:04.274405 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8","Type":"ContainerStarted","Data":"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8"} Jan 27 17:20:05 crc kubenswrapper[5049]: I0127 17:20:05.289393 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8","Type":"ContainerStarted","Data":"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60"} Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.264410 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.300346 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dfdec56-8598-48c7-9266-f6b9733e0355-config-data\") pod \"9dfdec56-8598-48c7-9266-f6b9733e0355\" (UID: \"9dfdec56-8598-48c7-9266-f6b9733e0355\") " Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.300540 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljls7\" (UniqueName: \"kubernetes.io/projected/9dfdec56-8598-48c7-9266-f6b9733e0355-kube-api-access-ljls7\") pod \"9dfdec56-8598-48c7-9266-f6b9733e0355\" (UID: \"9dfdec56-8598-48c7-9266-f6b9733e0355\") " Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.300606 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfdec56-8598-48c7-9266-f6b9733e0355-combined-ca-bundle\") pod \"9dfdec56-8598-48c7-9266-f6b9733e0355\" (UID: \"9dfdec56-8598-48c7-9266-f6b9733e0355\") " Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.315177 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dfdec56-8598-48c7-9266-f6b9733e0355-kube-api-access-ljls7" (OuterVolumeSpecName: "kube-api-access-ljls7") pod "9dfdec56-8598-48c7-9266-f6b9733e0355" (UID: "9dfdec56-8598-48c7-9266-f6b9733e0355"). InnerVolumeSpecName "kube-api-access-ljls7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.330056 5049 generic.go:334] "Generic (PLEG): container finished" podID="9dfdec56-8598-48c7-9266-f6b9733e0355" containerID="af7fb8d9d1b1260eba0215eeb0de476e16200572a2e090311c33a055db76705b" exitCode=137 Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.330301 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9dfdec56-8598-48c7-9266-f6b9733e0355","Type":"ContainerDied","Data":"af7fb8d9d1b1260eba0215eeb0de476e16200572a2e090311c33a055db76705b"} Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.330354 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9dfdec56-8598-48c7-9266-f6b9733e0355","Type":"ContainerDied","Data":"2fe307fff764794627a24036097cbc1648c4205d84c96a593dab6d35b55fdcc6"} Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.330384 5049 scope.go:117] "RemoveContainer" containerID="af7fb8d9d1b1260eba0215eeb0de476e16200572a2e090311c33a055db76705b" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.330663 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.337777 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dfdec56-8598-48c7-9266-f6b9733e0355-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dfdec56-8598-48c7-9266-f6b9733e0355" (UID: "9dfdec56-8598-48c7-9266-f6b9733e0355"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.339658 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8","Type":"ContainerStarted","Data":"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a"} Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.362358 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dfdec56-8598-48c7-9266-f6b9733e0355-config-data" (OuterVolumeSpecName: "config-data") pod "9dfdec56-8598-48c7-9266-f6b9733e0355" (UID: "9dfdec56-8598-48c7-9266-f6b9733e0355"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.406359 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljls7\" (UniqueName: \"kubernetes.io/projected/9dfdec56-8598-48c7-9266-f6b9733e0355-kube-api-access-ljls7\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.406420 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfdec56-8598-48c7-9266-f6b9733e0355-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.406433 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dfdec56-8598-48c7-9266-f6b9733e0355-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.429123 5049 scope.go:117] "RemoveContainer" containerID="af7fb8d9d1b1260eba0215eeb0de476e16200572a2e090311c33a055db76705b" Jan 27 17:20:06 crc kubenswrapper[5049]: E0127 17:20:06.429697 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af7fb8d9d1b1260eba0215eeb0de476e16200572a2e090311c33a055db76705b\": container with ID starting with af7fb8d9d1b1260eba0215eeb0de476e16200572a2e090311c33a055db76705b not found: ID does not exist" containerID="af7fb8d9d1b1260eba0215eeb0de476e16200572a2e090311c33a055db76705b" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.429735 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af7fb8d9d1b1260eba0215eeb0de476e16200572a2e090311c33a055db76705b"} err="failed to get container status \"af7fb8d9d1b1260eba0215eeb0de476e16200572a2e090311c33a055db76705b\": rpc error: code = NotFound desc = could not find container \"af7fb8d9d1b1260eba0215eeb0de476e16200572a2e090311c33a055db76705b\": container with ID starting with af7fb8d9d1b1260eba0215eeb0de476e16200572a2e090311c33a055db76705b not found: ID does not exist" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.480037 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.480866 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.485082 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.486466 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.659130 5049 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.666305 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.681331 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 17:20:06 crc kubenswrapper[5049]: E0127 17:20:06.681791 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dfdec56-8598-48c7-9266-f6b9733e0355" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.681808 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dfdec56-8598-48c7-9266-f6b9733e0355" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.681973 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dfdec56-8598-48c7-9266-f6b9733e0355" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.682556 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.687869 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.687945 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.691269 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.694979 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.817623 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnrpd\" (UniqueName: \"kubernetes.io/projected/ee012087-89b0-49aa-bac7-4cd715e80294-kube-api-access-gnrpd\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.817705 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.818006 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.818063 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.818152 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.919852 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnrpd\" (UniqueName: \"kubernetes.io/projected/ee012087-89b0-49aa-bac7-4cd715e80294-kube-api-access-gnrpd\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.919929 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.920012 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.920034 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.920069 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.925298 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.933466 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.940374 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.940398 5049 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.944431 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnrpd\" (UniqueName: \"kubernetes.io/projected/ee012087-89b0-49aa-bac7-4cd715e80294-kube-api-access-gnrpd\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:06 crc kubenswrapper[5049]: I0127 17:20:06.998298 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.237036 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.349535 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ee012087-89b0-49aa-bac7-4cd715e80294","Type":"ContainerStarted","Data":"72bb98d846493bdc1f2d04de947b2818340ec8bb1cd1bbb18c71b1042ab20bb3"} Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.351944 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.355382 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.535567 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-h7clt"] Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.537407 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.572875 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-h7clt"] Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.653595 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.653841 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.654178 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.654218 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.654248 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wdc\" (UniqueName: \"kubernetes.io/projected/f44f2f88-5083-4314-ac57-54597bca9efa-kube-api-access-j2wdc\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.654316 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-config\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.658250 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dfdec56-8598-48c7-9266-f6b9733e0355" path="/var/lib/kubelet/pods/9dfdec56-8598-48c7-9266-f6b9733e0355/volumes" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.756348 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.756508 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-dns-svc\") pod 
\"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.756544 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.756572 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2wdc\" (UniqueName: \"kubernetes.io/projected/f44f2f88-5083-4314-ac57-54597bca9efa-kube-api-access-j2wdc\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.756609 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-config\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.756683 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.757449 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-config\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.757513 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.757822 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.757995 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.758119 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " 
pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.778812 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2wdc\" (UniqueName: \"kubernetes.io/projected/f44f2f88-5083-4314-ac57-54597bca9efa-kube-api-access-j2wdc\") pod \"dnsmasq-dns-89c5cd4d5-h7clt\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:07 crc kubenswrapper[5049]: I0127 17:20:07.864864 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:08 crc kubenswrapper[5049]: I0127 17:20:08.325803 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-h7clt"] Jan 27 17:20:08 crc kubenswrapper[5049]: W0127 17:20:08.339973 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf44f2f88_5083_4314_ac57_54597bca9efa.slice/crio-ce19a9ad63ad7a7d9111b62e35b46b5021e21a31d395ab7826e9b632141dff7b WatchSource:0}: Error finding container ce19a9ad63ad7a7d9111b62e35b46b5021e21a31d395ab7826e9b632141dff7b: Status 404 returned error can't find the container with id ce19a9ad63ad7a7d9111b62e35b46b5021e21a31d395ab7826e9b632141dff7b Jan 27 17:20:08 crc kubenswrapper[5049]: I0127 17:20:08.363876 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8","Type":"ContainerStarted","Data":"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c"} Jan 27 17:20:08 crc kubenswrapper[5049]: I0127 17:20:08.365026 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 17:20:08 crc kubenswrapper[5049]: I0127 17:20:08.368652 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" event={"ID":"f44f2f88-5083-4314-ac57-54597bca9efa","Type":"ContainerStarted","Data":"ce19a9ad63ad7a7d9111b62e35b46b5021e21a31d395ab7826e9b632141dff7b"} Jan 27 17:20:08 crc kubenswrapper[5049]: I0127 17:20:08.370661 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ee012087-89b0-49aa-bac7-4cd715e80294","Type":"ContainerStarted","Data":"b25b75cad7936ba61f756aac47c7d67fbdfb0f4ef169358438ea19123754ec57"} Jan 27 17:20:08 crc kubenswrapper[5049]: I0127 17:20:08.387222 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.466582739 podStartE2EDuration="6.387204196s" podCreationTimestamp="2026-01-27 17:20:02 +0000 UTC" firstStartedPulling="2026-01-27 17:20:03.245404242 +0000 UTC m=+1378.344377791" lastFinishedPulling="2026-01-27 17:20:07.166025699 +0000 UTC m=+1382.264999248" observedRunningTime="2026-01-27 17:20:08.385231759 +0000 UTC m=+1383.484205318" watchObservedRunningTime="2026-01-27 17:20:08.387204196 +0000 UTC m=+1383.486177745" Jan 27 17:20:08 crc kubenswrapper[5049]: I0127 17:20:08.411325 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.411307708 podStartE2EDuration="2.411307708s" podCreationTimestamp="2026-01-27 17:20:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:20:08.409875157 +0000 UTC m=+1383.508848706" watchObservedRunningTime="2026-01-27 17:20:08.411307708 
+0000 UTC m=+1383.510281257" Jan 27 17:20:09 crc kubenswrapper[5049]: I0127 17:20:09.403000 5049 generic.go:334] "Generic (PLEG): container finished" podID="f44f2f88-5083-4314-ac57-54597bca9efa" containerID="6d3929b43b9f145b761b25260c8babd94db0e221820b663ce447af35822c1b0e" exitCode=0 Jan 27 17:20:09 crc kubenswrapper[5049]: I0127 17:20:09.403994 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" event={"ID":"f44f2f88-5083-4314-ac57-54597bca9efa","Type":"ContainerDied","Data":"6d3929b43b9f145b761b25260c8babd94db0e221820b663ce447af35822c1b0e"} Jan 27 17:20:10 crc kubenswrapper[5049]: I0127 17:20:10.343650 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:20:10 crc kubenswrapper[5049]: I0127 17:20:10.413074 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" event={"ID":"f44f2f88-5083-4314-ac57-54597bca9efa","Type":"ContainerStarted","Data":"0dc2349cdaf0626b81e62b3d1171002e1ba55285be948e70262b59439fb7b6e2"} Jan 27 17:20:10 crc kubenswrapper[5049]: I0127 17:20:10.413232 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="abe02789-7401-49ef-9fa4-f79382894ccc" containerName="nova-api-log" containerID="cri-o://003e74c50c55ae2e04c081488aecd39f7c57e2a50af4b970a7d9f1fefb8ab522" gracePeriod=30 Jan 27 17:20:10 crc kubenswrapper[5049]: I0127 17:20:10.413302 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="abe02789-7401-49ef-9fa4-f79382894ccc" containerName="nova-api-api" containerID="cri-o://7a60f2b4c7236f8fc9cfa5573998b7a0ab2339659b74ad2a6722d0a266d9fb5b" gracePeriod=30 Jan 27 17:20:10 crc kubenswrapper[5049]: I0127 17:20:10.445375 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" podStartSLOduration=3.445354424 podStartE2EDuration="3.445354424s" podCreationTimestamp="2026-01-27 17:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:20:10.438454096 +0000 UTC m=+1385.537427645" watchObservedRunningTime="2026-01-27 17:20:10.445354424 +0000 UTC m=+1385.544327963" Jan 27 17:20:10 crc kubenswrapper[5049]: I0127 17:20:10.656984 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 17:20:11 crc kubenswrapper[5049]: I0127 17:20:11.085418 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:11 crc kubenswrapper[5049]: I0127 17:20:11.424514 5049 generic.go:334] "Generic (PLEG): container finished" podID="abe02789-7401-49ef-9fa4-f79382894ccc" containerID="003e74c50c55ae2e04c081488aecd39f7c57e2a50af4b970a7d9f1fefb8ab522" exitCode=143 Jan 27 17:20:11 crc kubenswrapper[5049]: I0127 17:20:11.424592 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abe02789-7401-49ef-9fa4-f79382894ccc","Type":"ContainerDied","Data":"003e74c50c55ae2e04c081488aecd39f7c57e2a50af4b970a7d9f1fefb8ab522"} Jan 27 17:20:11 crc kubenswrapper[5049]: I0127 17:20:11.424798 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="ceilometer-central-agent" containerID="cri-o://12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8" gracePeriod=30 Jan 27 17:20:11 
crc kubenswrapper[5049]: I0127 17:20:11.424873 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="sg-core" containerID="cri-o://a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a" gracePeriod=30 Jan 27 17:20:11 crc kubenswrapper[5049]: I0127 17:20:11.424914 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="ceilometer-notification-agent" containerID="cri-o://b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60" gracePeriod=30 Jan 27 17:20:11 crc kubenswrapper[5049]: I0127 17:20:11.425218 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:20:11 crc kubenswrapper[5049]: I0127 17:20:11.425978 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="proxy-httpd" containerID="cri-o://b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c" gracePeriod=30 Jan 27 17:20:11 crc kubenswrapper[5049]: I0127 17:20:11.999010 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.198056 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.351562 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-run-httpd\") pod \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.351635 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-config-data\") pod \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.351738 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-ceilometer-tls-certs\") pod \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.351763 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-combined-ca-bundle\") pod \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.351936 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" (UID: "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.352428 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-sg-core-conf-yaml\") pod \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.352524 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-scripts\") pod \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.352555 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phqdp\" (UniqueName: \"kubernetes.io/projected/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-kube-api-access-phqdp\") pod \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.352661 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-log-httpd\") pod \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\" (UID: \"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8\") " Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.353137 5049 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.353407 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" (UID: "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.358663 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-kube-api-access-phqdp" (OuterVolumeSpecName: "kube-api-access-phqdp") pod "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" (UID: "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8"). InnerVolumeSpecName "kube-api-access-phqdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.358724 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-scripts" (OuterVolumeSpecName: "scripts") pod "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" (UID: "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.384913 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" (UID: "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.401643 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" (UID: "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.433688 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" (UID: "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.436145 5049 generic.go:334] "Generic (PLEG): container finished" podID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerID="b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c" exitCode=0 Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.436233 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.436206 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8","Type":"ContainerDied","Data":"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c"} Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.436369 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8","Type":"ContainerDied","Data":"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a"} Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.436395 5049 scope.go:117] "RemoveContainer" containerID="b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.436323 5049 generic.go:334] "Generic (PLEG): container finished" podID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerID="a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a" exitCode=2 Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.436438 5049 generic.go:334] "Generic (PLEG): container finished" podID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerID="b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60" exitCode=0 Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.436456 5049 generic.go:334] "Generic (PLEG): container finished" podID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerID="12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8" exitCode=0 Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.436503 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8","Type":"ContainerDied","Data":"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60"} Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.436529 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8","Type":"ContainerDied","Data":"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8"} Jan 27 17:20:12 crc 
kubenswrapper[5049]: I0127 17:20:12.436541 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8","Type":"ContainerDied","Data":"2f8abaefe5a6a841f4c6f85196a7b221d16d1dac5df268f18c7dfe846e72c957"} Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.455616 5049 scope.go:117] "RemoveContainer" containerID="a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.456738 5049 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.456766 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.456778 5049 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.456807 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.456816 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phqdp\" (UniqueName: \"kubernetes.io/projected/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-kube-api-access-phqdp\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.456825 5049 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.465667 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-config-data" (OuterVolumeSpecName: "config-data") pod "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" (UID: "feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.486397 5049 scope.go:117] "RemoveContainer" containerID="b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.506422 5049 scope.go:117] "RemoveContainer" containerID="12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.529044 5049 scope.go:117] "RemoveContainer" containerID="b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c" Jan 27 17:20:12 crc kubenswrapper[5049]: E0127 17:20:12.529579 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c\": container with ID starting with b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c not found: ID does not exist" containerID="b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.529625 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c"} err="failed to get container status \"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c\": rpc error: code = NotFound desc = could not find container \"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c\": container with ID starting with b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.529654 5049 scope.go:117] "RemoveContainer" containerID="a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a" Jan 27 17:20:12 crc kubenswrapper[5049]: E0127 17:20:12.530161 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a\": container with ID starting with a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a not found: ID does not exist" containerID="a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.530187 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a"} err="failed to get container status \"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a\": rpc error: code = NotFound desc = could not find container \"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a\": container with ID starting with a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.530201 5049 scope.go:117] "RemoveContainer" containerID="b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60" Jan 27 17:20:12 crc kubenswrapper[5049]: E0127 17:20:12.530477 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60\": container with ID starting with b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60 not found: ID does not exist" containerID="b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60" Jan 27 17:20:12 crc 
kubenswrapper[5049]: I0127 17:20:12.530497 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60"} err="failed to get container status \"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60\": rpc error: code = NotFound desc = could not find container \"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60\": container with ID starting with b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60 not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.530517 5049 scope.go:117] "RemoveContainer" containerID="12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8" Jan 27 17:20:12 crc kubenswrapper[5049]: E0127 17:20:12.531032 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8\": container with ID starting with 12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8 not found: ID does not exist" containerID="12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.531053 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8"} err="failed to get container status \"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8\": rpc error: code = NotFound desc = could not find container \"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8\": container with ID starting with 12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8 not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.531065 5049 scope.go:117] "RemoveContainer" containerID="b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.531344 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c"} err="failed to get container status \"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c\": rpc error: code = NotFound desc = could not find container \"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c\": container with ID starting with b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.531374 5049 scope.go:117] "RemoveContainer" containerID="a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.532026 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a"} err="failed to get container status \"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a\": rpc error: code = NotFound desc = could not find container \"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a\": container with ID starting with a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.532049 5049 scope.go:117] "RemoveContainer" containerID="b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60" Jan 27 17:20:12 crc 
kubenswrapper[5049]: I0127 17:20:12.532463 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60"} err="failed to get container status \"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60\": rpc error: code = NotFound desc = could not find container \"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60\": container with ID starting with b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60 not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.532489 5049 scope.go:117] "RemoveContainer" containerID="12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.532779 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8"} err="failed to get container status \"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8\": rpc error: code = NotFound desc = could not find container \"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8\": container with ID starting with 12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8 not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.532804 5049 scope.go:117] "RemoveContainer" containerID="b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.533062 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c"} err="failed to get container status \"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c\": rpc error: code = NotFound desc = could not find container \"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c\": container with ID starting with b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.533081 5049 scope.go:117] "RemoveContainer" containerID="a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.533362 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a"} err="failed to get container status \"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a\": rpc error: code = NotFound desc = could not find container \"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a\": container with ID starting with a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.533445 5049 scope.go:117] "RemoveContainer" containerID="b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.533718 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60"} err="failed to get container status \"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60\": rpc error: code = NotFound desc = could not find container \"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60\": container with ID 
starting with b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60 not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.533739 5049 scope.go:117] "RemoveContainer" containerID="12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.534040 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8"} err="failed to get container status \"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8\": rpc error: code = NotFound desc = could not find container \"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8\": container with ID starting with 12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8 not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.534059 5049 scope.go:117] "RemoveContainer" containerID="b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.534336 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c"} err="failed to get container status \"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c\": rpc error: code = NotFound desc = could not find container \"b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c\": container with ID starting with b03ec5174973e5ff75e6667e31055fa9a14d156731554fd46d13999dbf37db0c not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.534450 5049 scope.go:117] "RemoveContainer" containerID="a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.534728 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a"} err="failed to get container status \"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a\": rpc error: code = NotFound desc = could not find container \"a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a\": container with ID starting with a4add7d36c23d3b18a3dad905c799fad0a98f03e34213cc63e4a7082e91d279a not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.534755 5049 scope.go:117] "RemoveContainer" containerID="b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.535065 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60"} err="failed to get container status \"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60\": rpc error: code = NotFound desc = could not find container \"b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60\": container with ID starting with b50cd14853d71a5240006efcf869fd67c3db1e0a450537775d78b1158f97ae60 not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.535164 5049 scope.go:117] "RemoveContainer" containerID="12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.535466 5049 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8"} err="failed to get container status \"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8\": rpc error: code = NotFound desc = could not find container \"12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8\": container with ID starting with 12db7ada64699351a7e4790c64a9b309788c7ea466e67ebb1f95937249d3b9c8 not found: ID does not exist" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.559016 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.802256 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.821181 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.834169 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:12 crc kubenswrapper[5049]: E0127 17:20:12.834647 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="ceilometer-central-agent" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.834686 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="ceilometer-central-agent" Jan 27 17:20:12 crc kubenswrapper[5049]: E0127 17:20:12.834702 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="proxy-httpd" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.834711 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="proxy-httpd" Jan 27 17:20:12 crc kubenswrapper[5049]: E0127 17:20:12.834750 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="ceilometer-notification-agent" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.834759 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="ceilometer-notification-agent" Jan 27 17:20:12 crc kubenswrapper[5049]: E0127 17:20:12.834770 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="sg-core" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.834778 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="sg-core" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.835052 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="sg-core" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.835081 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="proxy-httpd" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.835103 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" containerName="ceilometer-central-agent" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.835112 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" 
containerName="ceilometer-notification-agent" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.837323 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.841413 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.841550 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.841724 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.845140 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.966411 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.966804 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-scripts\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.966851 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.966870 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/282879d1-6e20-4ce4-954f-1b081fda112e-run-httpd\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.966888 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/282879d1-6e20-4ce4-954f-1b081fda112e-log-httpd\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.966905 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc96s\" (UniqueName: \"kubernetes.io/projected/282879d1-6e20-4ce4-954f-1b081fda112e-kube-api-access-qc96s\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.967012 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-config-data\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:12 crc kubenswrapper[5049]: I0127 17:20:12.967046 5049 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.068586 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.068643 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-scripts\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.068705 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.068724 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/282879d1-6e20-4ce4-954f-1b081fda112e-run-httpd\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.068742 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/282879d1-6e20-4ce4-954f-1b081fda112e-log-httpd\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.068759 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc96s\" (UniqueName: \"kubernetes.io/projected/282879d1-6e20-4ce4-954f-1b081fda112e-kube-api-access-qc96s\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.068829 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-config-data\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.068914 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.069687 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/282879d1-6e20-4ce4-954f-1b081fda112e-run-httpd\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.069857 
5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/282879d1-6e20-4ce4-954f-1b081fda112e-log-httpd\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.073032 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.074117 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-config-data\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.074544 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.074628 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-scripts\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.074808 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.100077 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc96s\" (UniqueName: \"kubernetes.io/projected/282879d1-6e20-4ce4-954f-1b081fda112e-kube-api-access-qc96s\") pod \"ceilometer-0\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.157822 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.203116 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.668063 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8" path="/var/lib/kubelet/pods/feb6a3e1-1aaa-4ec7-bc23-b27a18ae39e8/volumes" Jan 27 17:20:13 crc kubenswrapper[5049]: I0127 17:20:13.677523 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.069234 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.192741 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92dt9\" (UniqueName: \"kubernetes.io/projected/abe02789-7401-49ef-9fa4-f79382894ccc-kube-api-access-92dt9\") pod \"abe02789-7401-49ef-9fa4-f79382894ccc\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.193001 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe02789-7401-49ef-9fa4-f79382894ccc-combined-ca-bundle\") pod \"abe02789-7401-49ef-9fa4-f79382894ccc\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.193036 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abe02789-7401-49ef-9fa4-f79382894ccc-logs\") pod \"abe02789-7401-49ef-9fa4-f79382894ccc\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.193229 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe02789-7401-49ef-9fa4-f79382894ccc-config-data\") pod \"abe02789-7401-49ef-9fa4-f79382894ccc\" (UID: \"abe02789-7401-49ef-9fa4-f79382894ccc\") " Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.193834 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abe02789-7401-49ef-9fa4-f79382894ccc-logs" (OuterVolumeSpecName: "logs") pod "abe02789-7401-49ef-9fa4-f79382894ccc" (UID: "abe02789-7401-49ef-9fa4-f79382894ccc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.207432 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe02789-7401-49ef-9fa4-f79382894ccc-kube-api-access-92dt9" (OuterVolumeSpecName: "kube-api-access-92dt9") pod "abe02789-7401-49ef-9fa4-f79382894ccc" (UID: "abe02789-7401-49ef-9fa4-f79382894ccc"). InnerVolumeSpecName "kube-api-access-92dt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.224103 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe02789-7401-49ef-9fa4-f79382894ccc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abe02789-7401-49ef-9fa4-f79382894ccc" (UID: "abe02789-7401-49ef-9fa4-f79382894ccc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.253000 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe02789-7401-49ef-9fa4-f79382894ccc-config-data" (OuterVolumeSpecName: "config-data") pod "abe02789-7401-49ef-9fa4-f79382894ccc" (UID: "abe02789-7401-49ef-9fa4-f79382894ccc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.294825 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abe02789-7401-49ef-9fa4-f79382894ccc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.294854 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92dt9\" (UniqueName: \"kubernetes.io/projected/abe02789-7401-49ef-9fa4-f79382894ccc-kube-api-access-92dt9\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.294863 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe02789-7401-49ef-9fa4-f79382894ccc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.294874 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abe02789-7401-49ef-9fa4-f79382894ccc-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.456436 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"282879d1-6e20-4ce4-954f-1b081fda112e","Type":"ContainerStarted","Data":"479769d7e55cf6e1be82b7adf0530509c3928ffc4dfa3dd5cf8fd9ee86f43827"} Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.456477 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"282879d1-6e20-4ce4-954f-1b081fda112e","Type":"ContainerStarted","Data":"52f04c3046c513ea5283a1fe29963bebc834300cddf8e448414854a773339d46"} Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.457768 5049 generic.go:334] "Generic (PLEG): container finished" podID="abe02789-7401-49ef-9fa4-f79382894ccc" containerID="7a60f2b4c7236f8fc9cfa5573998b7a0ab2339659b74ad2a6722d0a266d9fb5b" exitCode=0 Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.457792 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abe02789-7401-49ef-9fa4-f79382894ccc","Type":"ContainerDied","Data":"7a60f2b4c7236f8fc9cfa5573998b7a0ab2339659b74ad2a6722d0a266d9fb5b"} Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.457806 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"abe02789-7401-49ef-9fa4-f79382894ccc","Type":"ContainerDied","Data":"2105cbc96338b12919210628406308a3d32bbddf7f21d6ed3011b3f11939387a"} Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.457823 5049 scope.go:117] "RemoveContainer" containerID="7a60f2b4c7236f8fc9cfa5573998b7a0ab2339659b74ad2a6722d0a266d9fb5b" Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.457942 5049 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.486317 5049 scope.go:117] "RemoveContainer" containerID="003e74c50c55ae2e04c081488aecd39f7c57e2a50af4b970a7d9f1fefb8ab522"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.488164 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.495732 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.512103 5049 scope.go:117] "RemoveContainer" containerID="7a60f2b4c7236f8fc9cfa5573998b7a0ab2339659b74ad2a6722d0a266d9fb5b"
Jan 27 17:20:14 crc kubenswrapper[5049]: E0127 17:20:14.512596 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a60f2b4c7236f8fc9cfa5573998b7a0ab2339659b74ad2a6722d0a266d9fb5b\": container with ID starting with 7a60f2b4c7236f8fc9cfa5573998b7a0ab2339659b74ad2a6722d0a266d9fb5b not found: ID does not exist" containerID="7a60f2b4c7236f8fc9cfa5573998b7a0ab2339659b74ad2a6722d0a266d9fb5b"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.512628 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a60f2b4c7236f8fc9cfa5573998b7a0ab2339659b74ad2a6722d0a266d9fb5b"} err="failed to get container status \"7a60f2b4c7236f8fc9cfa5573998b7a0ab2339659b74ad2a6722d0a266d9fb5b\": rpc error: code = NotFound desc = could not find container \"7a60f2b4c7236f8fc9cfa5573998b7a0ab2339659b74ad2a6722d0a266d9fb5b\": container with ID starting with 7a60f2b4c7236f8fc9cfa5573998b7a0ab2339659b74ad2a6722d0a266d9fb5b not found: ID does not exist"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.512655 5049 scope.go:117] "RemoveContainer" containerID="003e74c50c55ae2e04c081488aecd39f7c57e2a50af4b970a7d9f1fefb8ab522"
Jan 27 17:20:14 crc kubenswrapper[5049]: E0127 17:20:14.512987 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"003e74c50c55ae2e04c081488aecd39f7c57e2a50af4b970a7d9f1fefb8ab522\": container with ID starting with 003e74c50c55ae2e04c081488aecd39f7c57e2a50af4b970a7d9f1fefb8ab522 not found: ID does not exist" containerID="003e74c50c55ae2e04c081488aecd39f7c57e2a50af4b970a7d9f1fefb8ab522"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.513010 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"003e74c50c55ae2e04c081488aecd39f7c57e2a50af4b970a7d9f1fefb8ab522"} err="failed to get container status \"003e74c50c55ae2e04c081488aecd39f7c57e2a50af4b970a7d9f1fefb8ab522\": rpc error: code = NotFound desc = could not find container \"003e74c50c55ae2e04c081488aecd39f7c57e2a50af4b970a7d9f1fefb8ab522\": container with ID starting with 003e74c50c55ae2e04c081488aecd39f7c57e2a50af4b970a7d9f1fefb8ab522 not found: ID does not exist"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.514736 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 27 17:20:14 crc kubenswrapper[5049]: E0127 17:20:14.515109 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe02789-7401-49ef-9fa4-f79382894ccc" containerName="nova-api-api"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.515127 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe02789-7401-49ef-9fa4-f79382894ccc" containerName="nova-api-api"
Jan 27 17:20:14 crc kubenswrapper[5049]: E0127 17:20:14.515145 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe02789-7401-49ef-9fa4-f79382894ccc" containerName="nova-api-log"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.515151 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe02789-7401-49ef-9fa4-f79382894ccc" containerName="nova-api-log"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.515322 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe02789-7401-49ef-9fa4-f79382894ccc" containerName="nova-api-api"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.515335 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe02789-7401-49ef-9fa4-f79382894ccc" containerName="nova-api-log"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.516713 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.519634 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.519929 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.520062 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.532309 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.602045 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-public-tls-certs\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.602387 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.602414 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.602479 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-config-data\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.602503 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f21bdd51-3e3c-476c-a746-812ab3df6fb5-logs\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.602573 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89wkk\" (UniqueName: \"kubernetes.io/projected/f21bdd51-3e3c-476c-a746-812ab3df6fb5-kube-api-access-89wkk\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.704115 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-public-tls-certs\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.705262 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.705348 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.705465 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-config-data\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.705550 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f21bdd51-3e3c-476c-a746-812ab3df6fb5-logs\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.705736 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89wkk\" (UniqueName: \"kubernetes.io/projected/f21bdd51-3e3c-476c-a746-812ab3df6fb5-kube-api-access-89wkk\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.706162 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f21bdd51-3e3c-476c-a746-812ab3df6fb5-logs\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.710869 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-config-data\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.711225 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-public-tls-certs\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.711699 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.718307 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.724622 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89wkk\" (UniqueName: \"kubernetes.io/projected/f21bdd51-3e3c-476c-a746-812ab3df6fb5-kube-api-access-89wkk\") pod \"nova-api-0\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " pod="openstack/nova-api-0"
Jan 27 17:20:14 crc kubenswrapper[5049]: I0127 17:20:14.839820 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 17:20:15 crc kubenswrapper[5049]: I0127 17:20:15.337897 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 17:20:15 crc kubenswrapper[5049]: I0127 17:20:15.468432 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"282879d1-6e20-4ce4-954f-1b081fda112e","Type":"ContainerStarted","Data":"0fee396ff9988183407a9f48508bdc86941a951c0787478b5250b7d04a9d6e69"}
Jan 27 17:20:15 crc kubenswrapper[5049]: I0127 17:20:15.480004 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f21bdd51-3e3c-476c-a746-812ab3df6fb5","Type":"ContainerStarted","Data":"4d54283e3b713e29c0c557687850cb68251f702a02654fd09548b1c05f66d551"}
Jan 27 17:20:15 crc kubenswrapper[5049]: I0127 17:20:15.660105 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe02789-7401-49ef-9fa4-f79382894ccc" path="/var/lib/kubelet/pods/abe02789-7401-49ef-9fa4-f79382894ccc/volumes"
Jan 27 17:20:16 crc kubenswrapper[5049]: I0127 17:20:16.489309 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"282879d1-6e20-4ce4-954f-1b081fda112e","Type":"ContainerStarted","Data":"3d9e7ba960cceb878019684ddc73cf3da126a46c7a29be95e352a6f5f75ee022"}
Jan 27 17:20:16 crc kubenswrapper[5049]: I0127 17:20:16.491479 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f21bdd51-3e3c-476c-a746-812ab3df6fb5","Type":"ContainerStarted","Data":"c9a87f7b31abf0ffaf8e0139d8e3c912c15b1ce164893a4f1ceb46bc7797d1db"}
Jan 27 17:20:16 crc kubenswrapper[5049]: I0127 17:20:16.491505 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f21bdd51-3e3c-476c-a746-812ab3df6fb5","Type":"ContainerStarted","Data":"6d0d32a7f806ba5cc6ab4c564b1f82a685d048c24c0f8dc5ba479960835c7bc2"}
Jan 27 17:20:16 crc kubenswrapper[5049]: I0127 17:20:16.520439 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5204215039999998 podStartE2EDuration="2.520421504s" podCreationTimestamp="2026-01-27 17:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:20:16.513164326 +0000 UTC m=+1391.612137895" watchObservedRunningTime="2026-01-27 17:20:16.520421504 +0000 UTC m=+1391.619395053"
Jan 27 17:20:16 crc kubenswrapper[5049]: I0127 17:20:16.998659 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.034451 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.534396 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.706117 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-mcq6h"]
Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.707688 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mcq6h"
Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.710380 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.710380 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.719695 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mcq6h"]
Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.798109 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mcq6h\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " pod="openstack/nova-cell1-cell-mapping-mcq6h"
Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.798213 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-scripts\") pod \"nova-cell1-cell-mapping-mcq6h\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " pod="openstack/nova-cell1-cell-mapping-mcq6h"
Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.798262 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgcmc\" (UniqueName: \"kubernetes.io/projected/fb509f66-608c-454f-aef3-2c52323e916b-kube-api-access-pgcmc\") pod \"nova-cell1-cell-mapping-mcq6h\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " pod="openstack/nova-cell1-cell-mapping-mcq6h"
Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.798721 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-config-data\") pod \"nova-cell1-cell-mapping-mcq6h\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " pod="openstack/nova-cell1-cell-mapping-mcq6h"
Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.866831 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt"
Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.907857 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mcq6h\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " pod="openstack/nova-cell1-cell-mapping-mcq6h"
pod="openstack/nova-cell1-cell-mapping-mcq6h" Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.907977 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-scripts\") pod \"nova-cell1-cell-mapping-mcq6h\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " pod="openstack/nova-cell1-cell-mapping-mcq6h" Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.908063 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgcmc\" (UniqueName: \"kubernetes.io/projected/fb509f66-608c-454f-aef3-2c52323e916b-kube-api-access-pgcmc\") pod \"nova-cell1-cell-mapping-mcq6h\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " pod="openstack/nova-cell1-cell-mapping-mcq6h" Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.908279 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-config-data\") pod \"nova-cell1-cell-mapping-mcq6h\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " pod="openstack/nova-cell1-cell-mapping-mcq6h" Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.918348 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-config-data\") pod \"nova-cell1-cell-mapping-mcq6h\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " pod="openstack/nova-cell1-cell-mapping-mcq6h" Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.925637 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-scripts\") pod \"nova-cell1-cell-mapping-mcq6h\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " pod="openstack/nova-cell1-cell-mapping-mcq6h" Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.925725 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-mcq6h\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " pod="openstack/nova-cell1-cell-mapping-mcq6h" Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.959427 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgcmc\" (UniqueName: \"kubernetes.io/projected/fb509f66-608c-454f-aef3-2c52323e916b-kube-api-access-pgcmc\") pod \"nova-cell1-cell-mapping-mcq6h\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " pod="openstack/nova-cell1-cell-mapping-mcq6h" Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.984375 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6bs8c"] Jan 27 17:20:17 crc kubenswrapper[5049]: I0127 17:20:17.984699 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" podUID="44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" containerName="dnsmasq-dns" containerID="cri-o://00afd736563f03f919eb6655adab62c54b2c697ac19fbd08baf82a3d455db34a" gracePeriod=10 Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.080037 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mcq6h" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.507468 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.519240 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"282879d1-6e20-4ce4-954f-1b081fda112e","Type":"ContainerStarted","Data":"f27661ef9801ac8af02d09341f0cbeefecfd032909c2927e72b3c5856291c9f8"} Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.519422 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="ceilometer-central-agent" containerID="cri-o://479769d7e55cf6e1be82b7adf0530509c3928ffc4dfa3dd5cf8fd9ee86f43827" gracePeriod=30 Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.519682 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.519730 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="proxy-httpd" containerID="cri-o://f27661ef9801ac8af02d09341f0cbeefecfd032909c2927e72b3c5856291c9f8" gracePeriod=30 Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.519773 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="sg-core" containerID="cri-o://3d9e7ba960cceb878019684ddc73cf3da126a46c7a29be95e352a6f5f75ee022" gracePeriod=30 Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.519804 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="ceilometer-notification-agent" containerID="cri-o://0fee396ff9988183407a9f48508bdc86941a951c0787478b5250b7d04a9d6e69" gracePeriod=30 Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.542576 5049 generic.go:334] "Generic (PLEG): container finished" podID="44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" containerID="00afd736563f03f919eb6655adab62c54b2c697ac19fbd08baf82a3d455db34a" exitCode=0 Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.542703 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" event={"ID":"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5","Type":"ContainerDied","Data":"00afd736563f03f919eb6655adab62c54b2c697ac19fbd08baf82a3d455db34a"} Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.542735 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" event={"ID":"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5","Type":"ContainerDied","Data":"29c8f428c2d7e58e7a24d8a188daa6a28a54d6e53fa13e04afd8878c51e5eaa1"} Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.542760 5049 scope.go:117] "RemoveContainer" containerID="00afd736563f03f919eb6655adab62c54b2c697ac19fbd08baf82a3d455db34a" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.542893 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-6bs8c" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.561467 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7586778499999998 podStartE2EDuration="6.561446211s" podCreationTimestamp="2026-01-27 17:20:12 +0000 UTC" firstStartedPulling="2026-01-27 17:20:13.69405804 +0000 UTC m=+1388.793031589" lastFinishedPulling="2026-01-27 17:20:17.496826401 +0000 UTC m=+1392.595799950" observedRunningTime="2026-01-27 17:20:18.553096981 +0000 UTC m=+1393.652070530" watchObservedRunningTime="2026-01-27 17:20:18.561446211 +0000 UTC m=+1393.660419760" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.595863 5049 scope.go:117] "RemoveContainer" containerID="9c8440db0ef78ecae49ee6c85a51604fad4316cd9d40ddbcf8eb90f6935bb776" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.639976 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-ovsdbserver-sb\") pod \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.640038 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-dns-swift-storage-0\") pod \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.640119 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-dns-svc\") pod \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.640269 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q7bj\" (UniqueName: \"kubernetes.io/projected/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-kube-api-access-9q7bj\") pod \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.640339 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-config\") pod \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.640367 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-ovsdbserver-nb\") pod \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\" (UID: \"44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5\") " Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.646901 5049 scope.go:117] "RemoveContainer" containerID="00afd736563f03f919eb6655adab62c54b2c697ac19fbd08baf82a3d455db34a" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.653254 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-kube-api-access-9q7bj" (OuterVolumeSpecName: "kube-api-access-9q7bj") pod "44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" (UID: "44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5"). 
InnerVolumeSpecName "kube-api-access-9q7bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:20:18 crc kubenswrapper[5049]: E0127 17:20:18.653500 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00afd736563f03f919eb6655adab62c54b2c697ac19fbd08baf82a3d455db34a\": container with ID starting with 00afd736563f03f919eb6655adab62c54b2c697ac19fbd08baf82a3d455db34a not found: ID does not exist" containerID="00afd736563f03f919eb6655adab62c54b2c697ac19fbd08baf82a3d455db34a" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.653630 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00afd736563f03f919eb6655adab62c54b2c697ac19fbd08baf82a3d455db34a"} err="failed to get container status \"00afd736563f03f919eb6655adab62c54b2c697ac19fbd08baf82a3d455db34a\": rpc error: code = NotFound desc = could not find container \"00afd736563f03f919eb6655adab62c54b2c697ac19fbd08baf82a3d455db34a\": container with ID starting with 00afd736563f03f919eb6655adab62c54b2c697ac19fbd08baf82a3d455db34a not found: ID does not exist" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.653657 5049 scope.go:117] "RemoveContainer" containerID="9c8440db0ef78ecae49ee6c85a51604fad4316cd9d40ddbcf8eb90f6935bb776" Jan 27 17:20:18 crc kubenswrapper[5049]: E0127 17:20:18.654221 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c8440db0ef78ecae49ee6c85a51604fad4316cd9d40ddbcf8eb90f6935bb776\": container with ID starting with 9c8440db0ef78ecae49ee6c85a51604fad4316cd9d40ddbcf8eb90f6935bb776 not found: ID does not exist" containerID="9c8440db0ef78ecae49ee6c85a51604fad4316cd9d40ddbcf8eb90f6935bb776" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.654256 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c8440db0ef78ecae49ee6c85a51604fad4316cd9d40ddbcf8eb90f6935bb776"} err="failed to get container status \"9c8440db0ef78ecae49ee6c85a51604fad4316cd9d40ddbcf8eb90f6935bb776\": rpc error: code = NotFound desc = could not find container \"9c8440db0ef78ecae49ee6c85a51604fad4316cd9d40ddbcf8eb90f6935bb776\": container with ID starting with 9c8440db0ef78ecae49ee6c85a51604fad4316cd9d40ddbcf8eb90f6935bb776 not found: ID does not exist" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.670997 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-mcq6h"] Jan 27 17:20:18 crc kubenswrapper[5049]: W0127 17:20:18.683205 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb509f66_608c_454f_aef3_2c52323e916b.slice/crio-16ef37e70a0c6202ef5f891d1ee995a5770cbad6c709abf4bf2c17bb19137d17 WatchSource:0}: Error finding container 16ef37e70a0c6202ef5f891d1ee995a5770cbad6c709abf4bf2c17bb19137d17: Status 404 returned error can't find the container with id 16ef37e70a0c6202ef5f891d1ee995a5770cbad6c709abf4bf2c17bb19137d17 Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.705481 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" (UID: "44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.724158 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" (UID: "44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.730374 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" (UID: "44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.737180 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" (UID: "44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.739521 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-config" (OuterVolumeSpecName: "config") pod "44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" (UID: "44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.742994 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q7bj\" (UniqueName: \"kubernetes.io/projected/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-kube-api-access-9q7bj\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.743023 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.743034 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.743047 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.743058 5049 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.743071 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.877337 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6bs8c"] Jan 27 
17:20:18 crc kubenswrapper[5049]: I0127 17:20:18.886195 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-6bs8c"] Jan 27 17:20:19 crc kubenswrapper[5049]: I0127 17:20:19.553053 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mcq6h" event={"ID":"fb509f66-608c-454f-aef3-2c52323e916b","Type":"ContainerStarted","Data":"ed46c77a82d243a5da6e3709d5009665adaeef4af279b00988f34600ecd91dc7"} Jan 27 17:20:19 crc kubenswrapper[5049]: I0127 17:20:19.553355 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mcq6h" event={"ID":"fb509f66-608c-454f-aef3-2c52323e916b","Type":"ContainerStarted","Data":"16ef37e70a0c6202ef5f891d1ee995a5770cbad6c709abf4bf2c17bb19137d17"} Jan 27 17:20:19 crc kubenswrapper[5049]: I0127 17:20:19.556701 5049 generic.go:334] "Generic (PLEG): container finished" podID="282879d1-6e20-4ce4-954f-1b081fda112e" containerID="f27661ef9801ac8af02d09341f0cbeefecfd032909c2927e72b3c5856291c9f8" exitCode=0 Jan 27 17:20:19 crc kubenswrapper[5049]: I0127 17:20:19.556738 5049 generic.go:334] "Generic (PLEG): container finished" podID="282879d1-6e20-4ce4-954f-1b081fda112e" containerID="3d9e7ba960cceb878019684ddc73cf3da126a46c7a29be95e352a6f5f75ee022" exitCode=2 Jan 27 17:20:19 crc kubenswrapper[5049]: I0127 17:20:19.556751 5049 generic.go:334] "Generic (PLEG): container finished" podID="282879d1-6e20-4ce4-954f-1b081fda112e" containerID="0fee396ff9988183407a9f48508bdc86941a951c0787478b5250b7d04a9d6e69" exitCode=0 Jan 27 17:20:19 crc kubenswrapper[5049]: I0127 17:20:19.556767 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"282879d1-6e20-4ce4-954f-1b081fda112e","Type":"ContainerDied","Data":"f27661ef9801ac8af02d09341f0cbeefecfd032909c2927e72b3c5856291c9f8"} Jan 27 17:20:19 crc kubenswrapper[5049]: I0127 17:20:19.556829 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"282879d1-6e20-4ce4-954f-1b081fda112e","Type":"ContainerDied","Data":"3d9e7ba960cceb878019684ddc73cf3da126a46c7a29be95e352a6f5f75ee022"} Jan 27 17:20:19 crc kubenswrapper[5049]: I0127 17:20:19.556846 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"282879d1-6e20-4ce4-954f-1b081fda112e","Type":"ContainerDied","Data":"0fee396ff9988183407a9f48508bdc86941a951c0787478b5250b7d04a9d6e69"} Jan 27 17:20:19 crc kubenswrapper[5049]: I0127 17:20:19.568917 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-mcq6h" podStartSLOduration=2.56889943 podStartE2EDuration="2.56889943s" podCreationTimestamp="2026-01-27 17:20:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:20:19.56752446 +0000 UTC m=+1394.666498009" watchObservedRunningTime="2026-01-27 17:20:19.56889943 +0000 UTC m=+1394.667872979" Jan 27 17:20:19 crc kubenswrapper[5049]: I0127 17:20:19.658784 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" path="/var/lib/kubelet/pods/44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5/volumes" Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.606534 5049 generic.go:334] "Generic (PLEG): container finished" podID="282879d1-6e20-4ce4-954f-1b081fda112e" containerID="479769d7e55cf6e1be82b7adf0530509c3928ffc4dfa3dd5cf8fd9ee86f43827" exitCode=0 Jan 27 17:20:22 crc 
kubenswrapper[5049]: I0127 17:20:22.606758 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"282879d1-6e20-4ce4-954f-1b081fda112e","Type":"ContainerDied","Data":"479769d7e55cf6e1be82b7adf0530509c3928ffc4dfa3dd5cf8fd9ee86f43827"} Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.792117 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.927877 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-sg-core-conf-yaml\") pod \"282879d1-6e20-4ce4-954f-1b081fda112e\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.927941 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/282879d1-6e20-4ce4-954f-1b081fda112e-run-httpd\") pod \"282879d1-6e20-4ce4-954f-1b081fda112e\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.928005 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-ceilometer-tls-certs\") pod \"282879d1-6e20-4ce4-954f-1b081fda112e\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.928037 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/282879d1-6e20-4ce4-954f-1b081fda112e-log-httpd\") pod \"282879d1-6e20-4ce4-954f-1b081fda112e\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.928056 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-scripts\") pod \"282879d1-6e20-4ce4-954f-1b081fda112e\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.928108 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-config-data\") pod \"282879d1-6e20-4ce4-954f-1b081fda112e\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.928128 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-combined-ca-bundle\") pod \"282879d1-6e20-4ce4-954f-1b081fda112e\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.928167 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc96s\" (UniqueName: \"kubernetes.io/projected/282879d1-6e20-4ce4-954f-1b081fda112e-kube-api-access-qc96s\") pod \"282879d1-6e20-4ce4-954f-1b081fda112e\" (UID: \"282879d1-6e20-4ce4-954f-1b081fda112e\") " Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.928528 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/282879d1-6e20-4ce4-954f-1b081fda112e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"282879d1-6e20-4ce4-954f-1b081fda112e" (UID: "282879d1-6e20-4ce4-954f-1b081fda112e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.929103 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/282879d1-6e20-4ce4-954f-1b081fda112e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "282879d1-6e20-4ce4-954f-1b081fda112e" (UID: "282879d1-6e20-4ce4-954f-1b081fda112e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.930073 5049 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/282879d1-6e20-4ce4-954f-1b081fda112e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.930107 5049 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/282879d1-6e20-4ce4-954f-1b081fda112e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.933701 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-scripts" (OuterVolumeSpecName: "scripts") pod "282879d1-6e20-4ce4-954f-1b081fda112e" (UID: "282879d1-6e20-4ce4-954f-1b081fda112e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.954359 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282879d1-6e20-4ce4-954f-1b081fda112e-kube-api-access-qc96s" (OuterVolumeSpecName: "kube-api-access-qc96s") pod "282879d1-6e20-4ce4-954f-1b081fda112e" (UID: "282879d1-6e20-4ce4-954f-1b081fda112e"). InnerVolumeSpecName "kube-api-access-qc96s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:20:22 crc kubenswrapper[5049]: I0127 17:20:22.960788 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "282879d1-6e20-4ce4-954f-1b081fda112e" (UID: "282879d1-6e20-4ce4-954f-1b081fda112e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.008532 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "282879d1-6e20-4ce4-954f-1b081fda112e" (UID: "282879d1-6e20-4ce4-954f-1b081fda112e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.014919 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "282879d1-6e20-4ce4-954f-1b081fda112e" (UID: "282879d1-6e20-4ce4-954f-1b081fda112e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.031637 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.031680 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc96s\" (UniqueName: \"kubernetes.io/projected/282879d1-6e20-4ce4-954f-1b081fda112e-kube-api-access-qc96s\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.031692 5049 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.031700 5049 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.031709 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.041485 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-config-data" (OuterVolumeSpecName: "config-data") pod "282879d1-6e20-4ce4-954f-1b081fda112e" (UID: "282879d1-6e20-4ce4-954f-1b081fda112e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.133039 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/282879d1-6e20-4ce4-954f-1b081fda112e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.618453 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"282879d1-6e20-4ce4-954f-1b081fda112e","Type":"ContainerDied","Data":"52f04c3046c513ea5283a1fe29963bebc834300cddf8e448414854a773339d46"} Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.618822 5049 scope.go:117] "RemoveContainer" containerID="f27661ef9801ac8af02d09341f0cbeefecfd032909c2927e72b3c5856291c9f8" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.618551 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.654175 5049 scope.go:117] "RemoveContainer" containerID="3d9e7ba960cceb878019684ddc73cf3da126a46c7a29be95e352a6f5f75ee022" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.673860 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.693152 5049 scope.go:117] "RemoveContainer" containerID="0fee396ff9988183407a9f48508bdc86941a951c0787478b5250b7d04a9d6e69" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.698865 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.712517 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:23 crc kubenswrapper[5049]: E0127 17:20:23.712924 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="ceilometer-notification-agent" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.712944 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="ceilometer-notification-agent" Jan 27 17:20:23 crc kubenswrapper[5049]: E0127 17:20:23.712953 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="proxy-httpd" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.712961 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="proxy-httpd" Jan 27 17:20:23 crc kubenswrapper[5049]: E0127 17:20:23.712971 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="ceilometer-central-agent" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.712977 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="ceilometer-central-agent" Jan 27 17:20:23 crc kubenswrapper[5049]: E0127 17:20:23.712997 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="sg-core" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.713003 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="sg-core" Jan 27 17:20:23 crc kubenswrapper[5049]: E0127 17:20:23.713020 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" containerName="init" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.713026 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" containerName="init" Jan 27 17:20:23 crc kubenswrapper[5049]: E0127 17:20:23.713034 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" containerName="dnsmasq-dns" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.713040 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" containerName="dnsmasq-dns" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.713200 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="sg-core" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.713209 5049 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="44f4c3b9-6c89-488d-90b5-8ac2fca5e6a5" containerName="dnsmasq-dns" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.713222 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="ceilometer-notification-agent" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.713237 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="ceilometer-central-agent" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.713254 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" containerName="proxy-httpd" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.714808 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.723980 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.751049 5049 scope.go:117] "RemoveContainer" containerID="479769d7e55cf6e1be82b7adf0530509c3928ffc4dfa3dd5cf8fd9ee86f43827" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.751349 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.751445 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.751551 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.854811 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkptz\" (UniqueName: \"kubernetes.io/projected/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-kube-api-access-pkptz\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.855036 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-run-httpd\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.855183 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-scripts\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.855319 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-config-data\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.855416 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " 
pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.855519 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-log-httpd\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.855616 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.855721 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.957746 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-scripts\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.957784 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-config-data\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.957805 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.957850 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-log-httpd\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.957884 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.957918 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.957965 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkptz\" (UniqueName: \"kubernetes.io/projected/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-kube-api-access-pkptz\") pod 
\"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.957981 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-run-httpd\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.958324 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-run-httpd\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.959379 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-log-httpd\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.964511 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.964522 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.964752 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.972940 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-config-data\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.976403 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-scripts\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:23 crc kubenswrapper[5049]: I0127 17:20:23.979517 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkptz\" (UniqueName: \"kubernetes.io/projected/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-kube-api-access-pkptz\") pod \"ceilometer-0\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " pod="openstack/ceilometer-0" Jan 27 17:20:24 crc kubenswrapper[5049]: I0127 17:20:24.076646 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:20:24 crc kubenswrapper[5049]: I0127 17:20:24.530127 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:20:24 crc kubenswrapper[5049]: I0127 17:20:24.631254 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6b92aaa-ae4b-41ba-bd72-5e6d01518000","Type":"ContainerStarted","Data":"910f229bd79131c626c319e3474800d8ecce82d34d630f88a2fe24a6791b0b08"} Jan 27 17:20:24 crc kubenswrapper[5049]: I0127 17:20:24.634856 5049 generic.go:334] "Generic (PLEG): container finished" podID="fb509f66-608c-454f-aef3-2c52323e916b" containerID="ed46c77a82d243a5da6e3709d5009665adaeef4af279b00988f34600ecd91dc7" exitCode=0 Jan 27 17:20:24 crc kubenswrapper[5049]: I0127 17:20:24.634916 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mcq6h" event={"ID":"fb509f66-608c-454f-aef3-2c52323e916b","Type":"ContainerDied","Data":"ed46c77a82d243a5da6e3709d5009665adaeef4af279b00988f34600ecd91dc7"} Jan 27 17:20:24 crc kubenswrapper[5049]: I0127 17:20:24.840523 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 17:20:24 crc kubenswrapper[5049]: I0127 17:20:24.841714 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 17:20:25 crc kubenswrapper[5049]: I0127 17:20:25.668975 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="282879d1-6e20-4ce4-954f-1b081fda112e" path="/var/lib/kubelet/pods/282879d1-6e20-4ce4-954f-1b081fda112e/volumes" Jan 27 17:20:25 crc kubenswrapper[5049]: I0127 17:20:25.671526 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6b92aaa-ae4b-41ba-bd72-5e6d01518000","Type":"ContainerStarted","Data":"0a77e3ad3f76bbeeb3b5ada4b0ddb6ff8c950ed1581373853362db15eb5b6c69"} Jan 27 17:20:25 crc kubenswrapper[5049]: I0127 17:20:25.854867 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f21bdd51-3e3c-476c-a746-812ab3df6fb5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:20:25 crc kubenswrapper[5049]: I0127 17:20:25.854966 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f21bdd51-3e3c-476c-a746-812ab3df6fb5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.222983 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mcq6h" Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.307778 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgcmc\" (UniqueName: \"kubernetes.io/projected/fb509f66-608c-454f-aef3-2c52323e916b-kube-api-access-pgcmc\") pod \"fb509f66-608c-454f-aef3-2c52323e916b\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.307885 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-combined-ca-bundle\") pod \"fb509f66-608c-454f-aef3-2c52323e916b\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.308058 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-scripts\") pod \"fb509f66-608c-454f-aef3-2c52323e916b\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.308094 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-config-data\") pod \"fb509f66-608c-454f-aef3-2c52323e916b\" (UID: \"fb509f66-608c-454f-aef3-2c52323e916b\") " Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.312717 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb509f66-608c-454f-aef3-2c52323e916b-kube-api-access-pgcmc" (OuterVolumeSpecName: "kube-api-access-pgcmc") pod "fb509f66-608c-454f-aef3-2c52323e916b" (UID: "fb509f66-608c-454f-aef3-2c52323e916b"). InnerVolumeSpecName "kube-api-access-pgcmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.312914 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-scripts" (OuterVolumeSpecName: "scripts") pod "fb509f66-608c-454f-aef3-2c52323e916b" (UID: "fb509f66-608c-454f-aef3-2c52323e916b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.344116 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-config-data" (OuterVolumeSpecName: "config-data") pod "fb509f66-608c-454f-aef3-2c52323e916b" (UID: "fb509f66-608c-454f-aef3-2c52323e916b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.350006 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb509f66-608c-454f-aef3-2c52323e916b" (UID: "fb509f66-608c-454f-aef3-2c52323e916b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.409823 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.409855 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.409867 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb509f66-608c-454f-aef3-2c52323e916b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.409877 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgcmc\" (UniqueName: \"kubernetes.io/projected/fb509f66-608c-454f-aef3-2c52323e916b-kube-api-access-pgcmc\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.661878 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-mcq6h" event={"ID":"fb509f66-608c-454f-aef3-2c52323e916b","Type":"ContainerDied","Data":"16ef37e70a0c6202ef5f891d1ee995a5770cbad6c709abf4bf2c17bb19137d17"} Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.662110 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16ef37e70a0c6202ef5f891d1ee995a5770cbad6c709abf4bf2c17bb19137d17" Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.661949 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-mcq6h" Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.666639 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6b92aaa-ae4b-41ba-bd72-5e6d01518000","Type":"ContainerStarted","Data":"89a7551a874ceab6f02852e0381c82d453fda16064fce545e571eeeac5b7ce71"} Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.835962 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.838695 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f21bdd51-3e3c-476c-a746-812ab3df6fb5" containerName="nova-api-api" containerID="cri-o://c9a87f7b31abf0ffaf8e0139d8e3c912c15b1ce164893a4f1ceb46bc7797d1db" gracePeriod=30 Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.838972 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f21bdd51-3e3c-476c-a746-812ab3df6fb5" containerName="nova-api-log" containerID="cri-o://6d0d32a7f806ba5cc6ab4c564b1f82a685d048c24c0f8dc5ba479960835c7bc2" gracePeriod=30 Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.865786 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.866236 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="192a8418-7767-47e2-9171-a79a3a0c52e8" containerName="nova-metadata-log" containerID="cri-o://c1c4ab174dc6e57cad518e960e94c8342e1268c2c7e9c98566eb336aae1cfd78" gracePeriod=30 Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.866395 5049 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="192a8418-7767-47e2-9171-a79a3a0c52e8" containerName="nova-metadata-metadata" containerID="cri-o://37e7dff6e688a75d35d9f2116928ce958e18fdb1136880574686d34787e66aac" gracePeriod=30 Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.882275 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:20:26 crc kubenswrapper[5049]: I0127 17:20:26.882478 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="07e07973-dfb3-40a3-b2b7-22a0e4afb32a" containerName="nova-scheduler-scheduler" containerID="cri-o://0cf29a2180a86eedfb4fb0028ef630d971b0aeb64233f1b8bb2459f9f116e820" gracePeriod=30 Jan 27 17:20:27 crc kubenswrapper[5049]: I0127 17:20:27.690200 5049 generic.go:334] "Generic (PLEG): container finished" podID="07e07973-dfb3-40a3-b2b7-22a0e4afb32a" containerID="0cf29a2180a86eedfb4fb0028ef630d971b0aeb64233f1b8bb2459f9f116e820" exitCode=0 Jan 27 17:20:27 crc kubenswrapper[5049]: I0127 17:20:27.690270 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"07e07973-dfb3-40a3-b2b7-22a0e4afb32a","Type":"ContainerDied","Data":"0cf29a2180a86eedfb4fb0028ef630d971b0aeb64233f1b8bb2459f9f116e820"} Jan 27 17:20:27 crc kubenswrapper[5049]: I0127 17:20:27.703895 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6b92aaa-ae4b-41ba-bd72-5e6d01518000","Type":"ContainerStarted","Data":"4a1f580388f9867d0121ec63028c1c96323cf21dcefe575dbced96cf285661e4"} Jan 27 17:20:27 crc kubenswrapper[5049]: I0127 17:20:27.712351 5049 generic.go:334] "Generic (PLEG): container finished" podID="f21bdd51-3e3c-476c-a746-812ab3df6fb5" containerID="6d0d32a7f806ba5cc6ab4c564b1f82a685d048c24c0f8dc5ba479960835c7bc2" exitCode=143 Jan 27 17:20:27 crc kubenswrapper[5049]: I0127 17:20:27.712409 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f21bdd51-3e3c-476c-a746-812ab3df6fb5","Type":"ContainerDied","Data":"6d0d32a7f806ba5cc6ab4c564b1f82a685d048c24c0f8dc5ba479960835c7bc2"} Jan 27 17:20:27 crc kubenswrapper[5049]: I0127 17:20:27.717894 5049 generic.go:334] "Generic (PLEG): container finished" podID="192a8418-7767-47e2-9171-a79a3a0c52e8" containerID="c1c4ab174dc6e57cad518e960e94c8342e1268c2c7e9c98566eb336aae1cfd78" exitCode=143 Jan 27 17:20:27 crc kubenswrapper[5049]: I0127 17:20:27.717924 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"192a8418-7767-47e2-9171-a79a3a0c52e8","Type":"ContainerDied","Data":"c1c4ab174dc6e57cad518e960e94c8342e1268c2c7e9c98566eb336aae1cfd78"} Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.029898 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.143253 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtsz2\" (UniqueName: \"kubernetes.io/projected/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-kube-api-access-vtsz2\") pod \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\" (UID: \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\") " Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.143370 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-config-data\") pod \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\" (UID: \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\") " Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.143437 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-combined-ca-bundle\") pod \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\" (UID: \"07e07973-dfb3-40a3-b2b7-22a0e4afb32a\") " Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.148974 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-kube-api-access-vtsz2" (OuterVolumeSpecName: "kube-api-access-vtsz2") pod "07e07973-dfb3-40a3-b2b7-22a0e4afb32a" (UID: "07e07973-dfb3-40a3-b2b7-22a0e4afb32a"). InnerVolumeSpecName "kube-api-access-vtsz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.250935 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtsz2\" (UniqueName: \"kubernetes.io/projected/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-kube-api-access-vtsz2\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.256375 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-config-data" (OuterVolumeSpecName: "config-data") pod "07e07973-dfb3-40a3-b2b7-22a0e4afb32a" (UID: "07e07973-dfb3-40a3-b2b7-22a0e4afb32a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.276794 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07e07973-dfb3-40a3-b2b7-22a0e4afb32a" (UID: "07e07973-dfb3-40a3-b2b7-22a0e4afb32a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.352369 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.352402 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e07973-dfb3-40a3-b2b7-22a0e4afb32a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.727296 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"07e07973-dfb3-40a3-b2b7-22a0e4afb32a","Type":"ContainerDied","Data":"8df0545a21386c0ce6b46fc774eb6b343956f8d97c83085adc5b832db4dfbd45"} Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.727333 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.727369 5049 scope.go:117] "RemoveContainer" containerID="0cf29a2180a86eedfb4fb0028ef630d971b0aeb64233f1b8bb2459f9f116e820" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.733460 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6b92aaa-ae4b-41ba-bd72-5e6d01518000","Type":"ContainerStarted","Data":"699c3c69ddd586c2ba6efa14873e1ef576bd164da6c7b87cfc3147d472bbdbd9"} Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.733727 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.788960 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.654942386 podStartE2EDuration="5.788943296s" podCreationTimestamp="2026-01-27 17:20:23 +0000 UTC" firstStartedPulling="2026-01-27 17:20:24.53339742 +0000 UTC m=+1399.632371009" lastFinishedPulling="2026-01-27 17:20:27.66739837 +0000 UTC m=+1402.766371919" observedRunningTime="2026-01-27 17:20:28.760084647 +0000 UTC m=+1403.859058196" watchObservedRunningTime="2026-01-27 17:20:28.788943296 +0000 UTC m=+1403.887916845" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.798243 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.805502 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.813476 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:20:28 crc kubenswrapper[5049]: E0127 17:20:28.814427 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e07973-dfb3-40a3-b2b7-22a0e4afb32a" containerName="nova-scheduler-scheduler" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.814470 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e07973-dfb3-40a3-b2b7-22a0e4afb32a" containerName="nova-scheduler-scheduler" Jan 27 17:20:28 crc kubenswrapper[5049]: E0127 17:20:28.814536 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb509f66-608c-454f-aef3-2c52323e916b" containerName="nova-manage" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.814554 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb509f66-608c-454f-aef3-2c52323e916b" 
containerName="nova-manage" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.815039 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e07973-dfb3-40a3-b2b7-22a0e4afb32a" containerName="nova-scheduler-scheduler" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.815148 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb509f66-608c-454f-aef3-2c52323e916b" containerName="nova-manage" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.818929 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.821858 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.837235 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.870660 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9edd1d0-64dc-4c83-9149-04c772e4e517-config-data\") pod \"nova-scheduler-0\" (UID: \"c9edd1d0-64dc-4c83-9149-04c772e4e517\") " pod="openstack/nova-scheduler-0" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.870711 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9edd1d0-64dc-4c83-9149-04c772e4e517-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9edd1d0-64dc-4c83-9149-04c772e4e517\") " pod="openstack/nova-scheduler-0" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.870794 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgtmc\" (UniqueName: \"kubernetes.io/projected/c9edd1d0-64dc-4c83-9149-04c772e4e517-kube-api-access-wgtmc\") pod \"nova-scheduler-0\" (UID: \"c9edd1d0-64dc-4c83-9149-04c772e4e517\") " pod="openstack/nova-scheduler-0" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.974688 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9edd1d0-64dc-4c83-9149-04c772e4e517-config-data\") pod \"nova-scheduler-0\" (UID: \"c9edd1d0-64dc-4c83-9149-04c772e4e517\") " pod="openstack/nova-scheduler-0" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.974743 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9edd1d0-64dc-4c83-9149-04c772e4e517-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9edd1d0-64dc-4c83-9149-04c772e4e517\") " pod="openstack/nova-scheduler-0" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.974812 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgtmc\" (UniqueName: \"kubernetes.io/projected/c9edd1d0-64dc-4c83-9149-04c772e4e517-kube-api-access-wgtmc\") pod \"nova-scheduler-0\" (UID: \"c9edd1d0-64dc-4c83-9149-04c772e4e517\") " pod="openstack/nova-scheduler-0" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.980135 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9edd1d0-64dc-4c83-9149-04c772e4e517-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9edd1d0-64dc-4c83-9149-04c772e4e517\") " 
pod="openstack/nova-scheduler-0" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.982388 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9edd1d0-64dc-4c83-9149-04c772e4e517-config-data\") pod \"nova-scheduler-0\" (UID: \"c9edd1d0-64dc-4c83-9149-04c772e4e517\") " pod="openstack/nova-scheduler-0" Jan 27 17:20:28 crc kubenswrapper[5049]: I0127 17:20:28.990573 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgtmc\" (UniqueName: \"kubernetes.io/projected/c9edd1d0-64dc-4c83-9149-04c772e4e517-kube-api-access-wgtmc\") pod \"nova-scheduler-0\" (UID: \"c9edd1d0-64dc-4c83-9149-04c772e4e517\") " pod="openstack/nova-scheduler-0" Jan 27 17:20:29 crc kubenswrapper[5049]: I0127 17:20:29.134038 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 17:20:29 crc kubenswrapper[5049]: I0127 17:20:29.603929 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:20:29 crc kubenswrapper[5049]: W0127 17:20:29.614704 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9edd1d0_64dc_4c83_9149_04c772e4e517.slice/crio-cbb55c61c4738bd07007615aba68d952479e48cb7685479dfa3aa7b3856bb820 WatchSource:0}: Error finding container cbb55c61c4738bd07007615aba68d952479e48cb7685479dfa3aa7b3856bb820: Status 404 returned error can't find the container with id cbb55c61c4738bd07007615aba68d952479e48cb7685479dfa3aa7b3856bb820 Jan 27 17:20:29 crc kubenswrapper[5049]: I0127 17:20:29.664778 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e07973-dfb3-40a3-b2b7-22a0e4afb32a" path="/var/lib/kubelet/pods/07e07973-dfb3-40a3-b2b7-22a0e4afb32a/volumes" Jan 27 17:20:29 crc kubenswrapper[5049]: I0127 17:20:29.752082 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9edd1d0-64dc-4c83-9149-04c772e4e517","Type":"ContainerStarted","Data":"cbb55c61c4738bd07007615aba68d952479e48cb7685479dfa3aa7b3856bb820"} Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.023838 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="192a8418-7767-47e2-9171-a79a3a0c52e8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": read tcp 10.217.0.2:43550->10.217.0.191:8775: read: connection reset by peer" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.023902 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="192a8418-7767-47e2-9171-a79a3a0c52e8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.191:8775/\": read tcp 10.217.0.2:43560->10.217.0.191:8775: read: connection reset by peer" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.457191 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.502551 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192a8418-7767-47e2-9171-a79a3a0c52e8-logs\") pod \"192a8418-7767-47e2-9171-a79a3a0c52e8\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.502748 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-config-data\") pod \"192a8418-7767-47e2-9171-a79a3a0c52e8\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.502795 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74wdg\" (UniqueName: \"kubernetes.io/projected/192a8418-7767-47e2-9171-a79a3a0c52e8-kube-api-access-74wdg\") pod \"192a8418-7767-47e2-9171-a79a3a0c52e8\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.503725 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-nova-metadata-tls-certs\") pod \"192a8418-7767-47e2-9171-a79a3a0c52e8\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.503783 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-combined-ca-bundle\") pod \"192a8418-7767-47e2-9171-a79a3a0c52e8\" (UID: \"192a8418-7767-47e2-9171-a79a3a0c52e8\") " Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.504841 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192a8418-7767-47e2-9171-a79a3a0c52e8-logs" (OuterVolumeSpecName: "logs") pod "192a8418-7767-47e2-9171-a79a3a0c52e8" (UID: "192a8418-7767-47e2-9171-a79a3a0c52e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.525184 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192a8418-7767-47e2-9171-a79a3a0c52e8-kube-api-access-74wdg" (OuterVolumeSpecName: "kube-api-access-74wdg") pod "192a8418-7767-47e2-9171-a79a3a0c52e8" (UID: "192a8418-7767-47e2-9171-a79a3a0c52e8"). InnerVolumeSpecName "kube-api-access-74wdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.555233 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-config-data" (OuterVolumeSpecName: "config-data") pod "192a8418-7767-47e2-9171-a79a3a0c52e8" (UID: "192a8418-7767-47e2-9171-a79a3a0c52e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.590799 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "192a8418-7767-47e2-9171-a79a3a0c52e8" (UID: "192a8418-7767-47e2-9171-a79a3a0c52e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.606289 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.606326 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/192a8418-7767-47e2-9171-a79a3a0c52e8-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.606337 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.606347 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74wdg\" (UniqueName: \"kubernetes.io/projected/192a8418-7767-47e2-9171-a79a3a0c52e8-kube-api-access-74wdg\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.612873 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "192a8418-7767-47e2-9171-a79a3a0c52e8" (UID: "192a8418-7767-47e2-9171-a79a3a0c52e8"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.708212 5049 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/192a8418-7767-47e2-9171-a79a3a0c52e8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.765871 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9edd1d0-64dc-4c83-9149-04c772e4e517","Type":"ContainerStarted","Data":"c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362"} Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.769238 5049 generic.go:334] "Generic (PLEG): container finished" podID="192a8418-7767-47e2-9171-a79a3a0c52e8" containerID="37e7dff6e688a75d35d9f2116928ce958e18fdb1136880574686d34787e66aac" exitCode=0 Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.769318 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.769499 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"192a8418-7767-47e2-9171-a79a3a0c52e8","Type":"ContainerDied","Data":"37e7dff6e688a75d35d9f2116928ce958e18fdb1136880574686d34787e66aac"} Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.769635 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"192a8418-7767-47e2-9171-a79a3a0c52e8","Type":"ContainerDied","Data":"7fa251fe134fa8de7ad16d127f6de5305cd69e867cadbb2c545cb4e77b2ddc5c"} Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.769710 5049 scope.go:117] "RemoveContainer" containerID="37e7dff6e688a75d35d9f2116928ce958e18fdb1136880574686d34787e66aac" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.789838 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7898141990000003 podStartE2EDuration="2.789814199s" podCreationTimestamp="2026-01-27 17:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:20:30.781917392 +0000 UTC m=+1405.880890981" watchObservedRunningTime="2026-01-27 17:20:30.789814199 +0000 UTC m=+1405.888787748" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.813602 5049 scope.go:117] "RemoveContainer" containerID="c1c4ab174dc6e57cad518e960e94c8342e1268c2c7e9c98566eb336aae1cfd78" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.815315 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.824485 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.850403 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:20:30 crc kubenswrapper[5049]: E0127 17:20:30.850843 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192a8418-7767-47e2-9171-a79a3a0c52e8" containerName="nova-metadata-metadata" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.850857 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="192a8418-7767-47e2-9171-a79a3a0c52e8" containerName="nova-metadata-metadata" Jan 27 17:20:30 crc kubenswrapper[5049]: E0127 17:20:30.850874 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192a8418-7767-47e2-9171-a79a3a0c52e8" containerName="nova-metadata-log" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.850883 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="192a8418-7767-47e2-9171-a79a3a0c52e8" containerName="nova-metadata-log" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.851065 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="192a8418-7767-47e2-9171-a79a3a0c52e8" containerName="nova-metadata-log" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.851082 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="192a8418-7767-47e2-9171-a79a3a0c52e8" containerName="nova-metadata-metadata" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.852048 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.855243 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.855464 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.871829 5049 scope.go:117] "RemoveContainer" containerID="37e7dff6e688a75d35d9f2116928ce958e18fdb1136880574686d34787e66aac" Jan 27 17:20:30 crc kubenswrapper[5049]: E0127 17:20:30.880968 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37e7dff6e688a75d35d9f2116928ce958e18fdb1136880574686d34787e66aac\": container with ID starting with 37e7dff6e688a75d35d9f2116928ce958e18fdb1136880574686d34787e66aac not found: ID does not exist" containerID="37e7dff6e688a75d35d9f2116928ce958e18fdb1136880574686d34787e66aac" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.881025 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37e7dff6e688a75d35d9f2116928ce958e18fdb1136880574686d34787e66aac"} err="failed to get container status \"37e7dff6e688a75d35d9f2116928ce958e18fdb1136880574686d34787e66aac\": rpc error: code = NotFound desc = could not find container \"37e7dff6e688a75d35d9f2116928ce958e18fdb1136880574686d34787e66aac\": container with ID starting with 37e7dff6e688a75d35d9f2116928ce958e18fdb1136880574686d34787e66aac not found: ID does not exist" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.881050 5049 scope.go:117] "RemoveContainer" containerID="c1c4ab174dc6e57cad518e960e94c8342e1268c2c7e9c98566eb336aae1cfd78" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.883227 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:20:30 crc kubenswrapper[5049]: E0127 17:20:30.889976 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1c4ab174dc6e57cad518e960e94c8342e1268c2c7e9c98566eb336aae1cfd78\": container with ID starting with c1c4ab174dc6e57cad518e960e94c8342e1268c2c7e9c98566eb336aae1cfd78 not found: ID does not exist" containerID="c1c4ab174dc6e57cad518e960e94c8342e1268c2c7e9c98566eb336aae1cfd78" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.890019 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c4ab174dc6e57cad518e960e94c8342e1268c2c7e9c98566eb336aae1cfd78"} err="failed to get container status \"c1c4ab174dc6e57cad518e960e94c8342e1268c2c7e9c98566eb336aae1cfd78\": rpc error: code = NotFound desc = could not find container \"c1c4ab174dc6e57cad518e960e94c8342e1268c2c7e9c98566eb336aae1cfd78\": container with ID starting with c1c4ab174dc6e57cad518e960e94c8342e1268c2c7e9c98566eb336aae1cfd78 not found: ID does not exist" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.912019 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kghhw\" (UniqueName: \"kubernetes.io/projected/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-kube-api-access-kghhw\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " pod="openstack/nova-metadata-0" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.912484 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " pod="openstack/nova-metadata-0" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.912558 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-config-data\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " pod="openstack/nova-metadata-0" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.912643 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " pod="openstack/nova-metadata-0" Jan 27 17:20:30 crc kubenswrapper[5049]: I0127 17:20:30.912727 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-logs\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " pod="openstack/nova-metadata-0" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.014727 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kghhw\" (UniqueName: \"kubernetes.io/projected/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-kube-api-access-kghhw\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " pod="openstack/nova-metadata-0" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.014887 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " pod="openstack/nova-metadata-0" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.014925 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-config-data\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " pod="openstack/nova-metadata-0" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.014998 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " pod="openstack/nova-metadata-0" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.015041 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-logs\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " pod="openstack/nova-metadata-0" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.015569 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-logs\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " 
pod="openstack/nova-metadata-0" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.019013 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " pod="openstack/nova-metadata-0" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.019220 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " pod="openstack/nova-metadata-0" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.024411 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-config-data\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " pod="openstack/nova-metadata-0" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.037000 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kghhw\" (UniqueName: \"kubernetes.io/projected/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-kube-api-access-kghhw\") pod \"nova-metadata-0\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " pod="openstack/nova-metadata-0" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.186111 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.666666 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192a8418-7767-47e2-9171-a79a3a0c52e8" path="/var/lib/kubelet/pods/192a8418-7767-47e2-9171-a79a3a0c52e8/volumes" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.700597 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:20:31 crc kubenswrapper[5049]: W0127 17:20:31.703915 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65eb2d0b_ab1a_4a97_afdc_73592ac6cb29.slice/crio-b53278d8f9c6efdb8556caa5ee12eae569f1fa47705673087d97b645d1898d41 WatchSource:0}: Error finding container b53278d8f9c6efdb8556caa5ee12eae569f1fa47705673087d97b645d1898d41: Status 404 returned error can't find the container with id b53278d8f9c6efdb8556caa5ee12eae569f1fa47705673087d97b645d1898d41 Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.737286 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.784063 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29","Type":"ContainerStarted","Data":"b53278d8f9c6efdb8556caa5ee12eae569f1fa47705673087d97b645d1898d41"} Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.785715 5049 generic.go:334] "Generic (PLEG): container finished" podID="f21bdd51-3e3c-476c-a746-812ab3df6fb5" containerID="c9a87f7b31abf0ffaf8e0139d8e3c912c15b1ce164893a4f1ceb46bc7797d1db" exitCode=0 Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.785831 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f21bdd51-3e3c-476c-a746-812ab3df6fb5","Type":"ContainerDied","Data":"c9a87f7b31abf0ffaf8e0139d8e3c912c15b1ce164893a4f1ceb46bc7797d1db"} Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.785861 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f21bdd51-3e3c-476c-a746-812ab3df6fb5","Type":"ContainerDied","Data":"4d54283e3b713e29c0c557687850cb68251f702a02654fd09548b1c05f66d551"} Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.785883 5049 scope.go:117] "RemoveContainer" containerID="c9a87f7b31abf0ffaf8e0139d8e3c912c15b1ce164893a4f1ceb46bc7797d1db" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.785983 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.810268 5049 scope.go:117] "RemoveContainer" containerID="6d0d32a7f806ba5cc6ab4c564b1f82a685d048c24c0f8dc5ba479960835c7bc2" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.828653 5049 scope.go:117] "RemoveContainer" containerID="c9a87f7b31abf0ffaf8e0139d8e3c912c15b1ce164893a4f1ceb46bc7797d1db" Jan 27 17:20:31 crc kubenswrapper[5049]: E0127 17:20:31.828987 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9a87f7b31abf0ffaf8e0139d8e3c912c15b1ce164893a4f1ceb46bc7797d1db\": container with ID starting with c9a87f7b31abf0ffaf8e0139d8e3c912c15b1ce164893a4f1ceb46bc7797d1db not found: ID does not exist" containerID="c9a87f7b31abf0ffaf8e0139d8e3c912c15b1ce164893a4f1ceb46bc7797d1db" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.829021 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9a87f7b31abf0ffaf8e0139d8e3c912c15b1ce164893a4f1ceb46bc7797d1db"} err="failed to get container status \"c9a87f7b31abf0ffaf8e0139d8e3c912c15b1ce164893a4f1ceb46bc7797d1db\": rpc error: code = NotFound desc = could not find container \"c9a87f7b31abf0ffaf8e0139d8e3c912c15b1ce164893a4f1ceb46bc7797d1db\": container with ID starting with c9a87f7b31abf0ffaf8e0139d8e3c912c15b1ce164893a4f1ceb46bc7797d1db not found: ID does not exist" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.829042 5049 scope.go:117] "RemoveContainer" containerID="6d0d32a7f806ba5cc6ab4c564b1f82a685d048c24c0f8dc5ba479960835c7bc2" Jan 27 17:20:31 crc kubenswrapper[5049]: E0127 17:20:31.829241 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d0d32a7f806ba5cc6ab4c564b1f82a685d048c24c0f8dc5ba479960835c7bc2\": container with ID starting with 6d0d32a7f806ba5cc6ab4c564b1f82a685d048c24c0f8dc5ba479960835c7bc2 not found: ID does not exist" 
containerID="6d0d32a7f806ba5cc6ab4c564b1f82a685d048c24c0f8dc5ba479960835c7bc2" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.829260 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d0d32a7f806ba5cc6ab4c564b1f82a685d048c24c0f8dc5ba479960835c7bc2"} err="failed to get container status \"6d0d32a7f806ba5cc6ab4c564b1f82a685d048c24c0f8dc5ba479960835c7bc2\": rpc error: code = NotFound desc = could not find container \"6d0d32a7f806ba5cc6ab4c564b1f82a685d048c24c0f8dc5ba479960835c7bc2\": container with ID starting with 6d0d32a7f806ba5cc6ab4c564b1f82a685d048c24c0f8dc5ba479960835c7bc2 not found: ID does not exist" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.842459 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89wkk\" (UniqueName: \"kubernetes.io/projected/f21bdd51-3e3c-476c-a746-812ab3df6fb5-kube-api-access-89wkk\") pod \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.842562 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-combined-ca-bundle\") pod \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.842764 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f21bdd51-3e3c-476c-a746-812ab3df6fb5-logs\") pod \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.842843 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-config-data\") pod \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.842945 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-public-tls-certs\") pod \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.842978 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-internal-tls-certs\") pod \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\" (UID: \"f21bdd51-3e3c-476c-a746-812ab3df6fb5\") " Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.843934 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f21bdd51-3e3c-476c-a746-812ab3df6fb5-logs" (OuterVolumeSpecName: "logs") pod "f21bdd51-3e3c-476c-a746-812ab3df6fb5" (UID: "f21bdd51-3e3c-476c-a746-812ab3df6fb5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.849016 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21bdd51-3e3c-476c-a746-812ab3df6fb5-kube-api-access-89wkk" (OuterVolumeSpecName: "kube-api-access-89wkk") pod "f21bdd51-3e3c-476c-a746-812ab3df6fb5" (UID: "f21bdd51-3e3c-476c-a746-812ab3df6fb5"). InnerVolumeSpecName "kube-api-access-89wkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.869497 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f21bdd51-3e3c-476c-a746-812ab3df6fb5" (UID: "f21bdd51-3e3c-476c-a746-812ab3df6fb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.875235 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-config-data" (OuterVolumeSpecName: "config-data") pod "f21bdd51-3e3c-476c-a746-812ab3df6fb5" (UID: "f21bdd51-3e3c-476c-a746-812ab3df6fb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.906248 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f21bdd51-3e3c-476c-a746-812ab3df6fb5" (UID: "f21bdd51-3e3c-476c-a746-812ab3df6fb5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.908988 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f21bdd51-3e3c-476c-a746-812ab3df6fb5" (UID: "f21bdd51-3e3c-476c-a746-812ab3df6fb5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.945410 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.945694 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f21bdd51-3e3c-476c-a746-812ab3df6fb5-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.945704 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.945712 5049 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.945720 5049 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21bdd51-3e3c-476c-a746-812ab3df6fb5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:31 crc kubenswrapper[5049]: I0127 17:20:31.945728 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89wkk\" (UniqueName: \"kubernetes.io/projected/f21bdd51-3e3c-476c-a746-812ab3df6fb5-kube-api-access-89wkk\") on node \"crc\" DevicePath \"\"" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.142734 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.152554 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.169424 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 17:20:32 crc kubenswrapper[5049]: E0127 17:20:32.170056 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21bdd51-3e3c-476c-a746-812ab3df6fb5" containerName="nova-api-api" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.170198 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21bdd51-3e3c-476c-a746-812ab3df6fb5" containerName="nova-api-api" Jan 27 17:20:32 crc kubenswrapper[5049]: E0127 17:20:32.170280 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21bdd51-3e3c-476c-a746-812ab3df6fb5" containerName="nova-api-log" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.170337 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21bdd51-3e3c-476c-a746-812ab3df6fb5" containerName="nova-api-log" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.170602 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21bdd51-3e3c-476c-a746-812ab3df6fb5" containerName="nova-api-api" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.170723 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21bdd51-3e3c-476c-a746-812ab3df6fb5" containerName="nova-api-log" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.171706 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.173641 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.185419 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.185893 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.186070 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.250718 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-config-data\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.250806 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clqnp\" (UniqueName: \"kubernetes.io/projected/294e84c0-d49f-4e45-87d5-085c7accf51e-kube-api-access-clqnp\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.250994 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.251112 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.251147 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-public-tls-certs\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.251261 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/294e84c0-d49f-4e45-87d5-085c7accf51e-logs\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.353539 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.353613 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-internal-tls-certs\") pod 
\"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.353633 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-public-tls-certs\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.353697 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/294e84c0-d49f-4e45-87d5-085c7accf51e-logs\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.353725 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-config-data\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.353772 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clqnp\" (UniqueName: \"kubernetes.io/projected/294e84c0-d49f-4e45-87d5-085c7accf51e-kube-api-access-clqnp\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.354496 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/294e84c0-d49f-4e45-87d5-085c7accf51e-logs\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.358887 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-public-tls-certs\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.358959 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-config-data\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.362433 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.372492 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clqnp\" (UniqueName: \"kubernetes.io/projected/294e84c0-d49f-4e45-87d5-085c7accf51e-kube-api-access-clqnp\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.386414 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " 
pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.510730 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.796935 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29","Type":"ContainerStarted","Data":"a626eed524f20c0031f36e1a58fe0bc431e5f39289e2443f479f2a4aef39497a"} Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.797246 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29","Type":"ContainerStarted","Data":"1c41eff4d4660fe08786a326daa886f989924f5e641181a59a3f5d76f30bfad2"} Jan 27 17:20:32 crc kubenswrapper[5049]: I0127 17:20:32.817271 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.817246075 podStartE2EDuration="2.817246075s" podCreationTimestamp="2026-01-27 17:20:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:20:32.814461595 +0000 UTC m=+1407.913435164" watchObservedRunningTime="2026-01-27 17:20:32.817246075 +0000 UTC m=+1407.916219664" Jan 27 17:20:33 crc kubenswrapper[5049]: I0127 17:20:33.005155 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:20:33 crc kubenswrapper[5049]: I0127 17:20:33.658992 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f21bdd51-3e3c-476c-a746-812ab3df6fb5" path="/var/lib/kubelet/pods/f21bdd51-3e3c-476c-a746-812ab3df6fb5/volumes" Jan 27 17:20:33 crc kubenswrapper[5049]: I0127 17:20:33.810907 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"294e84c0-d49f-4e45-87d5-085c7accf51e","Type":"ContainerStarted","Data":"b7a20b9de92877a9ab934476ffa27a2a939c104c0bb643b1807fc727b4746d30"} Jan 27 17:20:33 crc kubenswrapper[5049]: I0127 17:20:33.810956 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"294e84c0-d49f-4e45-87d5-085c7accf51e","Type":"ContainerStarted","Data":"cbe71e694f563bfe04548dc1dfb37796b16b1852241671e8d1a4cc3caf1b96a2"} Jan 27 17:20:33 crc kubenswrapper[5049]: I0127 17:20:33.810974 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"294e84c0-d49f-4e45-87d5-085c7accf51e","Type":"ContainerStarted","Data":"5dbcff7b980f7f21cf78323dd18879804f6ca1ad81096ee5fdb4515233ff6492"} Jan 27 17:20:33 crc kubenswrapper[5049]: I0127 17:20:33.845636 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.8456027640000001 podStartE2EDuration="1.845602764s" podCreationTimestamp="2026-01-27 17:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:20:33.830924092 +0000 UTC m=+1408.929897641" watchObservedRunningTime="2026-01-27 17:20:33.845602764 +0000 UTC m=+1408.944576343" Jan 27 17:20:34 crc kubenswrapper[5049]: I0127 17:20:34.134907 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 17:20:36 crc kubenswrapper[5049]: I0127 17:20:36.187148 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 17:20:36 crc 
kubenswrapper[5049]: I0127 17:20:36.187526 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 17:20:39 crc kubenswrapper[5049]: I0127 17:20:39.134311 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 17:20:39 crc kubenswrapper[5049]: I0127 17:20:39.159197 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 17:20:39 crc kubenswrapper[5049]: I0127 17:20:39.908302 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 17:20:41 crc kubenswrapper[5049]: I0127 17:20:41.187133 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 17:20:41 crc kubenswrapper[5049]: I0127 17:20:41.187195 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 17:20:42 crc kubenswrapper[5049]: I0127 17:20:42.202864 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:20:42 crc kubenswrapper[5049]: I0127 17:20:42.202883 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:20:42 crc kubenswrapper[5049]: I0127 17:20:42.512299 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 17:20:42 crc kubenswrapper[5049]: I0127 17:20:42.512363 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 17:20:43 crc kubenswrapper[5049]: I0127 17:20:43.524899 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="294e84c0-d49f-4e45-87d5-085c7accf51e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:20:43 crc kubenswrapper[5049]: I0127 17:20:43.524977 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="294e84c0-d49f-4e45-87d5-085c7accf51e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:20:47 crc kubenswrapper[5049]: I0127 17:20:47.781578 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:20:47 crc kubenswrapper[5049]: I0127 17:20:47.782484 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
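The prober.go and patch_prober.go entries above are the kubelet's periodic HTTP(S) startup, readiness, and liveness checks failing until the freshly recreated nova pods begin answering on their new pod IPs. As a rough illustration only (the actual pod manifests are not part of this log), a probe of the kind that would produce the nova-metadata "Probe failed" lines might be defined like this in Go with the k8s.io/api/core/v1 types; the path, period, and threshold values here are assumptions, not values read from the cluster:

    // Minimal sketch, assuming an HTTPS startup probe on the nova-metadata port.
    // The kubelet GETs https://<podIP>:8775/ and, on transport errors such as
    // "connection reset by peer" or "Client.Timeout exceeded while awaiting
    // headers", emits the prober.go "Probe failed" entries seen above until
    // the endpoint answers.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	startup := &corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			HTTPGet: &corev1.HTTPGetAction{
    				Path:   "/",                  // assumed; the log only shows "https://10.217.0.203:8775/"
    				Port:   intstr.FromInt(8775), // nova-metadata port seen in the probe output
    				Scheme: corev1.URISchemeHTTPS,
    			},
    		},
    		PeriodSeconds:    10, // assumed cadence; roughly matches the spacing of the probe entries
    		FailureThreshold: 30, // assumed; gives a slow-starting service time before a restart
    	}
    	fmt.Printf("startup probe: %+v\n", startup)
    }

The "SyncLoop (probe)" transitions that follow, from "unhealthy" through "started" to "ready", are the kubelet recording exactly this state machine as each endpoint comes up.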
Jan 27 17:20:51 crc kubenswrapper[5049]: I0127 17:20:51.195617 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 17:20:51 crc kubenswrapper[5049]: I0127 17:20:51.198408 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 17:20:51 crc kubenswrapper[5049]: I0127 17:20:51.203487 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 17:20:52 crc kubenswrapper[5049]: I0127 17:20:52.013226 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 17:20:52 crc kubenswrapper[5049]: I0127 17:20:52.521214 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 17:20:52 crc kubenswrapper[5049]: I0127 17:20:52.521827 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 17:20:52 crc kubenswrapper[5049]: I0127 17:20:52.526754 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 17:20:52 crc kubenswrapper[5049]: I0127 17:20:52.529096 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 17:20:53 crc kubenswrapper[5049]: I0127 17:20:53.020121 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 17:20:53 crc kubenswrapper[5049]: I0127 17:20:53.024971 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 17:20:54 crc kubenswrapper[5049]: I0127 17:20:54.086156 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.054131 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9swgr"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.055614 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9swgr" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.079384 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s46tr\" (UniqueName: \"kubernetes.io/projected/f2cc976d-73bd-4d16-a1f6-84108954384f-kube-api-access-s46tr\") pod \"root-account-create-update-9swgr\" (UID: \"f2cc976d-73bd-4d16-a1f6-84108954384f\") " pod="openstack/root-account-create-update-9swgr" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.079452 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2cc976d-73bd-4d16-a1f6-84108954384f-operator-scripts\") pod \"root-account-create-update-9swgr\" (UID: \"f2cc976d-73bd-4d16-a1f6-84108954384f\") " pod="openstack/root-account-create-update-9swgr" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.081766 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.116569 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9swgr"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.163890 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-575b565ff8-wcjw4"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.165913 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.177251 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-c9f58f99c-tq7mf"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.178743 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.180728 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-combined-ca-bundle\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.180768 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-config-data\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.180792 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e55f335e-88f4-4e41-a177-0771cfd532c4-logs\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.180830 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-config-data\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.180856 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bs9h\" (UniqueName: \"kubernetes.io/projected/e55f335e-88f4-4e41-a177-0771cfd532c4-kube-api-access-8bs9h\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.180899 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s46tr\" (UniqueName: \"kubernetes.io/projected/f2cc976d-73bd-4d16-a1f6-84108954384f-kube-api-access-s46tr\") pod \"root-account-create-update-9swgr\" (UID: \"f2cc976d-73bd-4d16-a1f6-84108954384f\") " pod="openstack/root-account-create-update-9swgr" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.180922 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k28s7\" (UniqueName: \"kubernetes.io/projected/7b36a6d6-32ec-4c02-b274-319cb860222c-kube-api-access-k28s7\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.181304 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2cc976d-73bd-4d16-a1f6-84108954384f-operator-scripts\") pod \"root-account-create-update-9swgr\" (UID: \"f2cc976d-73bd-4d16-a1f6-84108954384f\") " pod="openstack/root-account-create-update-9swgr" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.181395 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-config-data-custom\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.181431 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b36a6d6-32ec-4c02-b274-319cb860222c-logs\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.181474 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-config-data-custom\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.181528 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-combined-ca-bundle\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.182318 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2cc976d-73bd-4d16-a1f6-84108954384f-operator-scripts\") pod \"root-account-create-update-9swgr\" (UID: \"f2cc976d-73bd-4d16-a1f6-84108954384f\") " pod="openstack/root-account-create-update-9swgr" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.191117 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-575b565ff8-wcjw4"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.227743 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c9f58f99c-tq7mf"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.233938 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s46tr\" (UniqueName: \"kubernetes.io/projected/f2cc976d-73bd-4d16-a1f6-84108954384f-kube-api-access-s46tr\") pod \"root-account-create-update-9swgr\" (UID: \"f2cc976d-73bd-4d16-a1f6-84108954384f\") " pod="openstack/root-account-create-update-9swgr" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.275334 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zr447"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.283377 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k28s7\" (UniqueName: \"kubernetes.io/projected/7b36a6d6-32ec-4c02-b274-319cb860222c-kube-api-access-k28s7\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.283473 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-config-data-custom\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.283519 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b36a6d6-32ec-4c02-b274-319cb860222c-logs\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.283582 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-config-data-custom\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.283651 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-combined-ca-bundle\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.283698 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-combined-ca-bundle\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.283718 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-config-data\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.283737 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e55f335e-88f4-4e41-a177-0771cfd532c4-logs\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.283792 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-config-data\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.283824 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bs9h\" (UniqueName: \"kubernetes.io/projected/e55f335e-88f4-4e41-a177-0771cfd532c4-kube-api-access-8bs9h\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.284548 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b36a6d6-32ec-4c02-b274-319cb860222c-logs\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.285126 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e55f335e-88f4-4e41-a177-0771cfd532c4-logs\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.291783 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-combined-ca-bundle\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.300732 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-combined-ca-bundle\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.301569 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-config-data-custom\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.307210 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-config-data-custom\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.309816 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-config-data\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.310796 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-config-data\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.331373 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k28s7\" (UniqueName: \"kubernetes.io/projected/7b36a6d6-32ec-4c02-b274-319cb860222c-kube-api-access-k28s7\") pod \"barbican-keystone-listener-575b565ff8-wcjw4\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.366750 5049 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zr447"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.379148 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9swgr" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.380468 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bs9h\" (UniqueName: \"kubernetes.io/projected/e55f335e-88f4-4e41-a177-0771cfd532c4-kube-api-access-8bs9h\") pod \"barbican-worker-c9f58f99c-tq7mf\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.398327 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-37a8-account-create-update-zmbd7"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.407680 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-37a8-account-create-update-zmbd7" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.416938 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.452140 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-37a8-account-create-update-zmbd7"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.481551 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-85447fcffb-gb5mq"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.495168 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.496066 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.520943 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d1fd-account-create-update-nvs8x"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.522271 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d1fd-account-create-update-nvs8x" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.523208 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.536775 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.563283 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.571733 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8ec5-account-create-update-dwp4c"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.572934 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8ec5-account-create-update-dwp4c" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.591240 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-config-data\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.591502 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6f79ef0-54e0-45cf-a60d-2f27be25b1f6-operator-scripts\") pod \"cinder-37a8-account-create-update-zmbd7\" (UID: \"a6f79ef0-54e0-45cf-a60d-2f27be25b1f6\") " pod="openstack/cinder-37a8-account-create-update-zmbd7" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.591600 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsq5s\" (UniqueName: \"kubernetes.io/projected/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-kube-api-access-nsq5s\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.591703 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-config-data-custom\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.591794 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6wwc\" (UniqueName: \"kubernetes.io/projected/a6f79ef0-54e0-45cf-a60d-2f27be25b1f6-kube-api-access-g6wwc\") pod \"cinder-37a8-account-create-update-zmbd7\" (UID: \"a6f79ef0-54e0-45cf-a60d-2f27be25b1f6\") " pod="openstack/cinder-37a8-account-create-update-zmbd7" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.591910 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-combined-ca-bundle\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.592024 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-logs\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.592159 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-internal-tls-certs\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.592266 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-public-tls-certs\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.596288 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.613284 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85447fcffb-gb5mq"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.637797 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d1fd-account-create-update-nvs8x"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.694774 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-config-data-custom\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.694808 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpk6l\" (UniqueName: \"kubernetes.io/projected/90792618-1456-45fe-9249-d31ad3b1a682-kube-api-access-jpk6l\") pod \"glance-d1fd-account-create-update-nvs8x\" (UID: \"90792618-1456-45fe-9249-d31ad3b1a682\") " pod="openstack/glance-d1fd-account-create-update-nvs8x" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.694833 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6wwc\" (UniqueName: \"kubernetes.io/projected/a6f79ef0-54e0-45cf-a60d-2f27be25b1f6-kube-api-access-g6wwc\") pod \"cinder-37a8-account-create-update-zmbd7\" (UID: \"a6f79ef0-54e0-45cf-a60d-2f27be25b1f6\") " pod="openstack/cinder-37a8-account-create-update-zmbd7" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.694909 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-combined-ca-bundle\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.694945 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-logs\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.694972 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90792618-1456-45fe-9249-d31ad3b1a682-operator-scripts\") pod \"glance-d1fd-account-create-update-nvs8x\" (UID: \"90792618-1456-45fe-9249-d31ad3b1a682\") " pod="openstack/glance-d1fd-account-create-update-nvs8x" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.695015 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-internal-tls-certs\") pod 
\"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.695037 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-public-tls-certs\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.695070 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a74888-1276-4ef7-95f7-939c1df326b6-operator-scripts\") pod \"placement-8ec5-account-create-update-dwp4c\" (UID: \"45a74888-1276-4ef7-95f7-939c1df326b6\") " pod="openstack/placement-8ec5-account-create-update-dwp4c" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.695108 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-config-data\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.695157 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6f79ef0-54e0-45cf-a60d-2f27be25b1f6-operator-scripts\") pod \"cinder-37a8-account-create-update-zmbd7\" (UID: \"a6f79ef0-54e0-45cf-a60d-2f27be25b1f6\") " pod="openstack/cinder-37a8-account-create-update-zmbd7" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.695189 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsq5s\" (UniqueName: \"kubernetes.io/projected/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-kube-api-access-nsq5s\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.695209 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlhpc\" (UniqueName: \"kubernetes.io/projected/45a74888-1276-4ef7-95f7-939c1df326b6-kube-api-access-qlhpc\") pod \"placement-8ec5-account-create-update-dwp4c\" (UID: \"45a74888-1276-4ef7-95f7-939c1df326b6\") " pod="openstack/placement-8ec5-account-create-update-dwp4c" Jan 27 17:21:17 crc kubenswrapper[5049]: E0127 17:21:17.699023 5049 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 17:21:17 crc kubenswrapper[5049]: E0127 17:21:17.699072 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data podName:62ffcfe9-3e93-48ee-8d03-9b653d1bfede nodeName:}" failed. No retries permitted until 2026-01-27 17:21:18.1990557 +0000 UTC m=+1453.298029249 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data") pod "rabbitmq-server-0" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede") : configmap "rabbitmq-config-data" not found Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.700848 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b09562c-f4c4-425a-a400-113e913a8031" path="/var/lib/kubelet/pods/7b09562c-f4c4-425a-a400-113e913a8031/volumes" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.703346 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.712003 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.704803 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6f79ef0-54e0-45cf-a60d-2f27be25b1f6-operator-scripts\") pod \"cinder-37a8-account-create-update-zmbd7\" (UID: \"a6f79ef0-54e0-45cf-a60d-2f27be25b1f6\") " pod="openstack/cinder-37a8-account-create-update-zmbd7" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.704237 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-logs\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.712568 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="467be3c3-34b2-4cea-8785-bacf5a6a5a39" containerName="openstackclient" containerID="cri-o://d37d30713855c20058ddaa0bf88d078a2270dcf3c57898ea717f889e47119dd9" gracePeriod=2 Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.724296 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-config-data\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.730195 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-public-tls-certs\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.731331 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-config-data-custom\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.731522 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-internal-tls-certs\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.736506 5049 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-combined-ca-bundle\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.742777 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-37a8-account-create-update-2pxzq"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.747919 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6wwc\" (UniqueName: \"kubernetes.io/projected/a6f79ef0-54e0-45cf-a60d-2f27be25b1f6-kube-api-access-g6wwc\") pod \"cinder-37a8-account-create-update-zmbd7\" (UID: \"a6f79ef0-54e0-45cf-a60d-2f27be25b1f6\") " pod="openstack/cinder-37a8-account-create-update-zmbd7" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.774089 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsq5s\" (UniqueName: \"kubernetes.io/projected/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-kube-api-access-nsq5s\") pod \"barbican-api-85447fcffb-gb5mq\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.774426 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-37a8-account-create-update-2pxzq"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.793447 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.793500 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.794320 5049 util.go:30] "No sandbox for pod can be found. 
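
The Liveness failure above is kubelet's HTTP prober being refused on the machine-config-daemon health port; patch_prober logs the raw GET output and prober.go records the verdict. Both halves can be checked by hand, roughly like this (the jsonpath filter is a sketch; the container name is taken from the record above):

    # Hit the same endpoint the kubelet prober uses (run on the node)
    curl -sS --max-time 2 http://127.0.0.1:8798/health
    # Show the probe definition driving these records
    oc -n openshift-machine-config-operator get pod machine-config-daemon-2d7n9 \
        -o jsonpath='{.spec.containers[?(@.name=="machine-config-daemon")].livenessProbe}'
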
Need to start a new one" pod="openstack/cinder-37a8-account-create-update-zmbd7" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.797662 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlhpc\" (UniqueName: \"kubernetes.io/projected/45a74888-1276-4ef7-95f7-939c1df326b6-kube-api-access-qlhpc\") pod \"placement-8ec5-account-create-update-dwp4c\" (UID: \"45a74888-1276-4ef7-95f7-939c1df326b6\") " pod="openstack/placement-8ec5-account-create-update-dwp4c" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.797728 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpk6l\" (UniqueName: \"kubernetes.io/projected/90792618-1456-45fe-9249-d31ad3b1a682-kube-api-access-jpk6l\") pod \"glance-d1fd-account-create-update-nvs8x\" (UID: \"90792618-1456-45fe-9249-d31ad3b1a682\") " pod="openstack/glance-d1fd-account-create-update-nvs8x" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.797897 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90792618-1456-45fe-9249-d31ad3b1a682-operator-scripts\") pod \"glance-d1fd-account-create-update-nvs8x\" (UID: \"90792618-1456-45fe-9249-d31ad3b1a682\") " pod="openstack/glance-d1fd-account-create-update-nvs8x" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.797978 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a74888-1276-4ef7-95f7-939c1df326b6-operator-scripts\") pod \"placement-8ec5-account-create-update-dwp4c\" (UID: \"45a74888-1276-4ef7-95f7-939c1df326b6\") " pod="openstack/placement-8ec5-account-create-update-dwp4c" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.798691 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a74888-1276-4ef7-95f7-939c1df326b6-operator-scripts\") pod \"placement-8ec5-account-create-update-dwp4c\" (UID: \"45a74888-1276-4ef7-95f7-939c1df326b6\") " pod="openstack/placement-8ec5-account-create-update-dwp4c" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.806388 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90792618-1456-45fe-9249-d31ad3b1a682-operator-scripts\") pod \"glance-d1fd-account-create-update-nvs8x\" (UID: \"90792618-1456-45fe-9249-d31ad3b1a682\") " pod="openstack/glance-d1fd-account-create-update-nvs8x" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.866210 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.867887 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlhpc\" (UniqueName: \"kubernetes.io/projected/45a74888-1276-4ef7-95f7-939c1df326b6-kube-api-access-qlhpc\") pod \"placement-8ec5-account-create-update-dwp4c\" (UID: \"45a74888-1276-4ef7-95f7-939c1df326b6\") " pod="openstack/placement-8ec5-account-create-update-dwp4c" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.887302 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpk6l\" (UniqueName: \"kubernetes.io/projected/90792618-1456-45fe-9249-d31ad3b1a682-kube-api-access-jpk6l\") pod \"glance-d1fd-account-create-update-nvs8x\" (UID: \"90792618-1456-45fe-9249-d31ad3b1a682\") " pod="openstack/glance-d1fd-account-create-update-nvs8x" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.922587 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8ec5-account-create-update-dwp4c"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.953412 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8ec5-account-create-update-dwp4c" Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.977164 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d1fd-account-create-update-c6g2d"] Jan 27 17:21:17 crc kubenswrapper[5049]: I0127 17:21:17.999789 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d1fd-account-create-update-c6g2d"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.022806 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-973c-account-create-update-bz54b"] Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.023209 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467be3c3-34b2-4cea-8785-bacf5a6a5a39" containerName="openstackclient" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.023220 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="467be3c3-34b2-4cea-8785-bacf5a6a5a39" containerName="openstackclient" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.023406 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="467be3c3-34b2-4cea-8785-bacf5a6a5a39" containerName="openstackclient" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.024419 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-973c-account-create-update-bz54b" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.044866 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.058310 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.058697 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a9fff683-8d1a-4a8c-b45f-8846c09a6f51" containerName="openstack-network-exporter" containerID="cri-o://5662e99c2eaeb51406aed793385fad5230b1d5921534ab4909d10f8d999bf0f2" gracePeriod=300 Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.109337 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-973c-account-create-update-bz54b"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.124926 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3a5b314-aabd-4f2e-a9a4-fb2509b9697b-operator-scripts\") pod \"barbican-973c-account-create-update-bz54b\" (UID: \"f3a5b314-aabd-4f2e-a9a4-fb2509b9697b\") " pod="openstack/barbican-973c-account-create-update-bz54b" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.125003 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6spr\" (UniqueName: \"kubernetes.io/projected/f3a5b314-aabd-4f2e-a9a4-fb2509b9697b-kube-api-access-g6spr\") pod \"barbican-973c-account-create-update-bz54b\" (UID: \"f3a5b314-aabd-4f2e-a9a4-fb2509b9697b\") " pod="openstack/barbican-973c-account-create-update-bz54b" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.161307 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d77-account-create-update-wxtnc"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.162800 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d77-account-create-update-wxtnc" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.175579 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d1fd-account-create-update-nvs8x" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.200970 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.207158 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d77-account-create-update-wxtnc"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.226616 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e35d2b53-2aed-4405-b61e-abe411cb3b42-operator-scripts\") pod \"neutron-6d77-account-create-update-wxtnc\" (UID: \"e35d2b53-2aed-4405-b61e-abe411cb3b42\") " pod="openstack/neutron-6d77-account-create-update-wxtnc" Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.228995 5049 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.229076 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data podName:62ffcfe9-3e93-48ee-8d03-9b653d1bfede nodeName:}" failed. No retries permitted until 2026-01-27 17:21:19.229044424 +0000 UTC m=+1454.328018033 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data") pod "rabbitmq-server-0" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede") : configmap "rabbitmq-config-data" not found Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.226663 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3a5b314-aabd-4f2e-a9a4-fb2509b9697b-operator-scripts\") pod \"barbican-973c-account-create-update-bz54b\" (UID: \"f3a5b314-aabd-4f2e-a9a4-fb2509b9697b\") " pod="openstack/barbican-973c-account-create-update-bz54b" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.230403 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tsf7\" (UniqueName: \"kubernetes.io/projected/e35d2b53-2aed-4405-b61e-abe411cb3b42-kube-api-access-2tsf7\") pod \"neutron-6d77-account-create-update-wxtnc\" (UID: \"e35d2b53-2aed-4405-b61e-abe411cb3b42\") " pod="openstack/neutron-6d77-account-create-update-wxtnc" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.230475 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6spr\" (UniqueName: \"kubernetes.io/projected/f3a5b314-aabd-4f2e-a9a4-fb2509b9697b-kube-api-access-g6spr\") pod \"barbican-973c-account-create-update-bz54b\" (UID: \"f3a5b314-aabd-4f2e-a9a4-fb2509b9697b\") " pod="openstack/barbican-973c-account-create-update-bz54b" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.231784 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3a5b314-aabd-4f2e-a9a4-fb2509b9697b-operator-scripts\") pod \"barbican-973c-account-create-update-bz54b\" (UID: \"f3a5b314-aabd-4f2e-a9a4-fb2509b9697b\") " pod="openstack/barbican-973c-account-create-update-bz54b" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.276730 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-8ec5-account-create-update-gdhjj"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.294353 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8ec5-account-create-update-gdhjj"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.306649 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.314906 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d95d-account-create-update-5h486"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.316208 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d95d-account-create-update-5h486" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.323977 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.324106 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6spr\" (UniqueName: \"kubernetes.io/projected/f3a5b314-aabd-4f2e-a9a4-fb2509b9697b-kube-api-access-g6spr\") pod \"barbican-973c-account-create-update-bz54b\" (UID: \"f3a5b314-aabd-4f2e-a9a4-fb2509b9697b\") " pod="openstack/barbican-973c-account-create-update-bz54b" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.328058 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d95d-account-create-update-5h486"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.333517 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e35d2b53-2aed-4405-b61e-abe411cb3b42-operator-scripts\") pod \"neutron-6d77-account-create-update-wxtnc\" (UID: \"e35d2b53-2aed-4405-b61e-abe411cb3b42\") " pod="openstack/neutron-6d77-account-create-update-wxtnc" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.333589 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tsf7\" (UniqueName: \"kubernetes.io/projected/e35d2b53-2aed-4405-b61e-abe411cb3b42-kube-api-access-2tsf7\") pod \"neutron-6d77-account-create-update-wxtnc\" (UID: \"e35d2b53-2aed-4405-b61e-abe411cb3b42\") " pod="openstack/neutron-6d77-account-create-update-wxtnc" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.334596 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e35d2b53-2aed-4405-b61e-abe411cb3b42-operator-scripts\") pod \"neutron-6d77-account-create-update-wxtnc\" (UID: \"e35d2b53-2aed-4405-b61e-abe411cb3b42\") " pod="openstack/neutron-6d77-account-create-update-wxtnc" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.337764 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-973c-account-create-update-vlmrw"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.347812 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-973c-account-create-update-vlmrw"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.361933 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d77-account-create-update-pf4g2"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.372525 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tsf7\" (UniqueName: 
\"kubernetes.io/projected/e35d2b53-2aed-4405-b61e-abe411cb3b42-kube-api-access-2tsf7\") pod \"neutron-6d77-account-create-update-wxtnc\" (UID: \"e35d2b53-2aed-4405-b61e-abe411cb3b42\") " pod="openstack/neutron-6d77-account-create-update-wxtnc" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.372591 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6d77-account-create-update-pf4g2"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.386369 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-973c-account-create-update-bz54b" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.386955 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-80af-account-create-update-f778m"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.388257 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-80af-account-create-update-f778m" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.393129 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.411874 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-80af-account-create-update-f778m"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.421402 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a9fff683-8d1a-4a8c-b45f-8846c09a6f51" containerName="ovsdbserver-sb" containerID="cri-o://c3146a5bf097d32d40daa89b3523de0e8cfc28cb7a623381d6ae35bbb1c89d79" gracePeriod=300 Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.424954 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d95d-account-create-update-ggmht"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.438702 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-dqp4j"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.444362 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81cf45aa-76f9-41d4-9385-7796174601b0-operator-scripts\") pod \"nova-cell0-80af-account-create-update-f778m\" (UID: \"81cf45aa-76f9-41d4-9385-7796174601b0\") " pod="openstack/nova-cell0-80af-account-create-update-f778m" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.444476 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282-operator-scripts\") pod \"nova-api-d95d-account-create-update-5h486\" (UID: \"98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282\") " pod="openstack/nova-api-d95d-account-create-update-5h486" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.444536 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwbrk\" (UniqueName: \"kubernetes.io/projected/98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282-kube-api-access-cwbrk\") pod \"nova-api-d95d-account-create-update-5h486\" (UID: \"98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282\") " pod="openstack/nova-api-d95d-account-create-update-5h486" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.444565 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cb5b\" (UniqueName: 
\"kubernetes.io/projected/81cf45aa-76f9-41d4-9385-7796174601b0-kube-api-access-8cb5b\") pod \"nova-cell0-80af-account-create-update-f778m\" (UID: \"81cf45aa-76f9-41d4-9385-7796174601b0\") " pod="openstack/nova-cell0-80af-account-create-update-f778m" Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.444851 5049 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.444934 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data podName:dbb24b4b-dfbd-431f-8244-098c40f7c24f nodeName:}" failed. No retries permitted until 2026-01-27 17:21:18.944914145 +0000 UTC m=+1454.043887694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data") pod "rabbitmq-cell1-server-0" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f") : configmap "rabbitmq-cell1-config-data" not found Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.452119 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d95d-account-create-update-ggmht"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.463483 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.464804 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="051db122-80f6-47fc-8d5c-5244d92e593d" containerName="ovn-northd" containerID="cri-o://ffdb84acf31942996807c242b98114c9c8d67e2eeaa568117f878ad3675f41d8" gracePeriod=30 Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.465181 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="051db122-80f6-47fc-8d5c-5244d92e593d" containerName="openstack-network-exporter" containerID="cri-o://bc748ff2fbd71fb24f80f8b730d7367d5fd71e407cbaf62490be6b914c76b0a8" gracePeriod=30 Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.485141 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-dqp4j"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.509178 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-544e-account-create-update-6qhf4"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.519302 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-544e-account-create-update-6qhf4" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.521239 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-544e-account-create-update-6qhf4"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.522128 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.541773 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hl7hg"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.546000 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cb5b\" (UniqueName: \"kubernetes.io/projected/81cf45aa-76f9-41d4-9385-7796174601b0-kube-api-access-8cb5b\") pod \"nova-cell0-80af-account-create-update-f778m\" (UID: \"81cf45aa-76f9-41d4-9385-7796174601b0\") " pod="openstack/nova-cell0-80af-account-create-update-f778m" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.549953 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81cf45aa-76f9-41d4-9385-7796174601b0-operator-scripts\") pod \"nova-cell0-80af-account-create-update-f778m\" (UID: \"81cf45aa-76f9-41d4-9385-7796174601b0\") " pod="openstack/nova-cell0-80af-account-create-update-f778m" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.550194 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282-operator-scripts\") pod \"nova-api-d95d-account-create-update-5h486\" (UID: \"98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282\") " pod="openstack/nova-api-d95d-account-create-update-5h486" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.550302 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwbrk\" (UniqueName: \"kubernetes.io/projected/98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282-kube-api-access-cwbrk\") pod \"nova-api-d95d-account-create-update-5h486\" (UID: \"98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282\") " pod="openstack/nova-api-d95d-account-create-update-5h486" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.553588 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282-operator-scripts\") pod \"nova-api-d95d-account-create-update-5h486\" (UID: \"98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282\") " pod="openstack/nova-api-d95d-account-create-update-5h486" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.553712 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-80af-account-create-update-9lhmg"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.564404 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-80af-account-create-update-9lhmg"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.567893 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81cf45aa-76f9-41d4-9385-7796174601b0-operator-scripts\") pod \"nova-cell0-80af-account-create-update-f778m\" (UID: \"81cf45aa-76f9-41d4-9385-7796174601b0\") " pod="openstack/nova-cell0-80af-account-create-update-f778m" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.579805 5049 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hl7hg"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.621590 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwbrk\" (UniqueName: \"kubernetes.io/projected/98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282-kube-api-access-cwbrk\") pod \"nova-api-d95d-account-create-update-5h486\" (UID: \"98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282\") " pod="openstack/nova-api-d95d-account-create-update-5h486" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.622276 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cb5b\" (UniqueName: \"kubernetes.io/projected/81cf45aa-76f9-41d4-9385-7796174601b0-kube-api-access-8cb5b\") pod \"nova-cell0-80af-account-create-update-f778m\" (UID: \"81cf45aa-76f9-41d4-9385-7796174601b0\") " pod="openstack/nova-cell0-80af-account-create-update-f778m" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.644860 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-dhqbn"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.686898 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5chqz\" (UniqueName: \"kubernetes.io/projected/7472ee77-bcfd-4e60-a7e4-359076bc334a-kube-api-access-5chqz\") pod \"nova-cell1-544e-account-create-update-6qhf4\" (UID: \"7472ee77-bcfd-4e60-a7e4-359076bc334a\") " pod="openstack/nova-cell1-544e-account-create-update-6qhf4" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.687176 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7472ee77-bcfd-4e60-a7e4-359076bc334a-operator-scripts\") pod \"nova-cell1-544e-account-create-update-6qhf4\" (UID: \"7472ee77-bcfd-4e60-a7e4-359076bc334a\") " pod="openstack/nova-cell1-544e-account-create-update-6qhf4" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.736010 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d77-account-create-update-wxtnc" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.748270 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-dhqbn"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.759065 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-544e-account-create-update-hnhlj"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.775066 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-544e-account-create-update-hnhlj"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.784941 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d95d-account-create-update-5h486" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.804977 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5chqz\" (UniqueName: \"kubernetes.io/projected/7472ee77-bcfd-4e60-a7e4-359076bc334a-kube-api-access-5chqz\") pod \"nova-cell1-544e-account-create-update-6qhf4\" (UID: \"7472ee77-bcfd-4e60-a7e4-359076bc334a\") " pod="openstack/nova-cell1-544e-account-create-update-6qhf4" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.805414 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7472ee77-bcfd-4e60-a7e4-359076bc334a-operator-scripts\") pod \"nova-cell1-544e-account-create-update-6qhf4\" (UID: \"7472ee77-bcfd-4e60-a7e4-359076bc334a\") " pod="openstack/nova-cell1-544e-account-create-update-6qhf4" Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.806735 5049 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.806776 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7472ee77-bcfd-4e60-a7e4-359076bc334a-operator-scripts podName:7472ee77-bcfd-4e60-a7e4-359076bc334a nodeName:}" failed. No retries permitted until 2026-01-27 17:21:19.306762369 +0000 UTC m=+1454.405735918 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7472ee77-bcfd-4e60-a7e4-359076bc334a-operator-scripts") pod "nova-cell1-544e-account-create-update-6qhf4" (UID: "7472ee77-bcfd-4e60-a7e4-359076bc334a") : configmap "openstack-cell1-scripts" not found Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.810078 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-80af-account-create-update-f778m" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.832774 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-m6m76"] Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.833025 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-m6m76" podUID="df9b6856-04b3-4630-b200-d99636bdb2fb" containerName="openstack-network-exporter" containerID="cri-o://17f19c76a2ac6d447b4808202544c8e5fab56d8363f7e4b4d465252ee3ed9eb6" gracePeriod=30 Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.838119 5049 projected.go:194] Error preparing data for projected volume kube-api-access-5chqz for pod openstack/nova-cell1-544e-account-create-update-6qhf4: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.842378 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7472ee77-bcfd-4e60-a7e4-359076bc334a-kube-api-access-5chqz podName:7472ee77-bcfd-4e60-a7e4-359076bc334a nodeName:}" failed. No retries permitted until 2026-01-27 17:21:19.338169051 +0000 UTC m=+1454.437142600 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5chqz" (UniqueName: "kubernetes.io/projected/7472ee77-bcfd-4e60-a7e4-359076bc334a-kube-api-access-5chqz") pod "nova-cell1-544e-account-create-update-6qhf4" (UID: "7472ee77-bcfd-4e60-a7e4-359076bc334a") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.850879 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pv2qx"] Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.894984 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3146a5bf097d32d40daa89b3523de0e8cfc28cb7a623381d6ae35bbb1c89d79 is running failed: container process not found" containerID="c3146a5bf097d32d40daa89b3523de0e8cfc28cb7a623381d6ae35bbb1c89d79" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.895986 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3146a5bf097d32d40daa89b3523de0e8cfc28cb7a623381d6ae35bbb1c89d79 is running failed: container process not found" containerID="c3146a5bf097d32d40daa89b3523de0e8cfc28cb7a623381d6ae35bbb1c89d79" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.896560 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3146a5bf097d32d40daa89b3523de0e8cfc28cb7a623381d6ae35bbb1c89d79 is running failed: container process not found" containerID="c3146a5bf097d32d40daa89b3523de0e8cfc28cb7a623381d6ae35bbb1c89d79" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.896617 5049 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3146a5bf097d32d40daa89b3523de0e8cfc28cb7a623381d6ae35bbb1c89d79 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="a9fff683-8d1a-4a8c-b45f-8846c09a6f51" containerName="ovsdbserver-sb" Jan 27 17:21:18 crc kubenswrapper[5049]: I0127 17:21:18.935239 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-7s8s5"] Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.962768 5049 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 27 17:21:18 crc kubenswrapper[5049]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 27 17:21:18 crc kubenswrapper[5049]:
Jan 27 17:21:18 crc kubenswrapper[5049]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 27 17:21:18 crc kubenswrapper[5049]:
Jan 27 17:21:18 crc kubenswrapper[5049]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 27 17:21:18 crc kubenswrapper[5049]:
Jan 27 17:21:18 crc kubenswrapper[5049]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 27 17:21:18 crc kubenswrapper[5049]:
Jan 27 17:21:18 crc kubenswrapper[5049]: if [ -n "" ]; then
Jan 27 17:21:18 crc kubenswrapper[5049]: GRANT_DATABASE=""
Jan 27 17:21:18 crc kubenswrapper[5049]: else
Jan 27 17:21:18 crc kubenswrapper[5049]: GRANT_DATABASE="*"
Jan 27 17:21:18 crc kubenswrapper[5049]: fi
Jan 27 17:21:18 crc kubenswrapper[5049]:
Jan 27 17:21:18 crc kubenswrapper[5049]: # going for maximum compatibility here:
Jan 27 17:21:18 crc kubenswrapper[5049]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 27 17:21:18 crc kubenswrapper[5049]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 27 17:21:18 crc kubenswrapper[5049]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 27 17:21:18 crc kubenswrapper[5049]: # support updates
Jan 27 17:21:18 crc kubenswrapper[5049]:
Jan 27 17:21:18 crc kubenswrapper[5049]: $MYSQL_CMD < logger="UnhandledError"
Jan 27 17:21:18 crc kubenswrapper[5049]: E0127 17:21:18.963829 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-9swgr" podUID="f2cc976d-73bd-4d16-a1f6-84108954384f" Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.011243 5049 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.011298 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data podName:dbb24b4b-dfbd-431f-8244-098c40f7c24f nodeName:}" failed. No retries permitted until 2026-01-27 17:21:20.011285023 +0000 UTC m=+1455.110258572 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data") pod "rabbitmq-cell1-server-0" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f") : configmap "rabbitmq-cell1-config-data" not found Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.044404 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9wgjf"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.102259 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9wgjf"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.122781 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-h7clt"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.123260 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" podUID="f44f2f88-5083-4314-ac57-54597bca9efa" containerName="dnsmasq-dns" containerID="cri-o://0dc2349cdaf0626b81e62b3d1171002e1ba55285be948e70262b59439fb7b6e2" gracePeriod=10 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.136850 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.137076 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0d76a4d6-b3a5-4931-9fb1-13531143ebaa" containerName="cinder-scheduler" containerID="cri-o://1ad029fde4bbfe950ea64c277ff2274dbaf9c65f928fb3d6f8c204dfa84b5ab2" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.137416 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0d76a4d6-b3a5-4931-9fb1-13531143ebaa" containerName="probe" containerID="cri-o://86ae2065456e0be818d0d2f291c75fa54963a08b110e6a19c05980e7f58e4078" gracePeriod=30 Jan 27 17:21:19 crc
kubenswrapper[5049]: I0127 17:21:19.176203 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.177548 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" containerName="openstack-network-exporter" containerID="cri-o://21f5e0dc07fb3d38bb7e19a51a4c0dbf807f4111a67586e4958d5638d23ad1b4" gracePeriod=300 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.207376 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4llcp"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.231109 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4llcp"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.243225 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-mcq6h"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.280933 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-mcq6h"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.287895 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-4ftjm"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.294794 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" containerName="ovsdbserver-nb" containerID="cri-o://626c86acb733344d07e343f4289761a9f30520eda1c48c93eebace6d3cdd0601" gracePeriod=300 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.302842 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-4ftjm"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.314182 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.314936 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-server" containerID="cri-o://a2dfb2a3d54aba3ae55d7279c088ecb8bf8242fef6f3a7d78a6cab7c08d00f25" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.315277 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-updater" containerID="cri-o://3b0b393e0e4d1401963714ab9252d9671cfb2791dc7e153bd5d4476e4584159a" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.315427 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="swift-recon-cron" containerID="cri-o://e31de55fb4f8fa7f46b4db48cfa14b2dcd4abbdcb8b59e5f1b5095831edf1d4b" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.315468 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="rsync" containerID="cri-o://3a93d9cd74365dc4b079066ac2c67767791d85d61773322bf02b5a01b937828e" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.315499 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" 
podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-expirer" containerID="cri-o://49580b0e03bfc33665c28a177a2a91fdc57ef1f9597020b693192a0906c2b084" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.315527 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-updater" containerID="cri-o://6cb007db423c0f0689563185212e4dbebe9824bceac2708df056fd0a50a20fec" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.315553 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-auditor" containerID="cri-o://d642302c59a400ff1d5fcfc04bd4f3e11605424d17a253b3bbb525c32f0483b0" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.315591 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-replicator" containerID="cri-o://1822fafc255330bb427411a6c035e5846b79b8c89c234fa42ba8b370aa1361a1" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.315620 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-server" containerID="cri-o://eb157c80aa43da636b4e1b90406a76023c1144590efb8be2dbcad94062ef1055" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.315831 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-reaper" containerID="cri-o://c54193743efddc35cc5d308539a2b6ddf4f91a56153042a087edab8b4178d076" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.315873 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-auditor" containerID="cri-o://a5245f69092058d5c8b04c536b6c645c68af7ffcd316ef3d92d2eec0e910b537" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.315905 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-replicator" containerID="cri-o://a7718a70cb7ace0faec55cd3c7efc512f21a009631dd55ff9c6521f3669078d3" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.315936 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-server" containerID="cri-o://22d0b0ce240d4ffa1f356f14354013ccc5a82232f0a1c7cab9e0c778a57cf470" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.315975 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-replicator" containerID="cri-o://03c47c98df171b7b9208a6248b44605bd8cbfeced5546e25c5829b8a2c6bc049" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.316005 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" 
containerName="account-auditor" containerID="cri-o://224833d0edff6b380a8a1b7d43dabc8125423b2aafe62311878605dff679aa61" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.317943 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7472ee77-bcfd-4e60-a7e4-359076bc334a-operator-scripts\") pod \"nova-cell1-544e-account-create-update-6qhf4\" (UID: \"7472ee77-bcfd-4e60-a7e4-359076bc334a\") " pod="openstack/nova-cell1-544e-account-create-update-6qhf4" Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.318085 5049 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.318192 5049 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.318232 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7472ee77-bcfd-4e60-a7e4-359076bc334a-operator-scripts podName:7472ee77-bcfd-4e60-a7e4-359076bc334a nodeName:}" failed. No retries permitted until 2026-01-27 17:21:20.318138618 +0000 UTC m=+1455.417112167 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7472ee77-bcfd-4e60-a7e4-359076bc334a-operator-scripts") pod "nova-cell1-544e-account-create-update-6qhf4" (UID: "7472ee77-bcfd-4e60-a7e4-359076bc334a") : configmap "openstack-cell1-scripts" not found Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.318247 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data podName:62ffcfe9-3e93-48ee-8d03-9b653d1bfede nodeName:}" failed. No retries permitted until 2026-01-27 17:21:21.31823982 +0000 UTC m=+1456.417213369 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data") pod "rabbitmq-server-0" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede") : configmap "rabbitmq-config-data" not found Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.342403 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.342865 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="492cb82e-33fb-4fc7-85e2-7d4285e5ff00" containerName="cinder-api-log" containerID="cri-o://3113dcce28048dab388fa9369937d0bd0a1fc6c1ae5f9d46acfb897247e15c0d" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.344154 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="492cb82e-33fb-4fc7-85e2-7d4285e5ff00" containerName="cinder-api" containerID="cri-o://166079e95268cb7e35e7bb3173c1768e058053e781153b7e92d90749146e26bf" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.358039 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9swgr"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.369631 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6c4bd57ddb-fz2dp"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.370542 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6c4bd57ddb-fz2dp" podUID="85620b2d-c74a-4c51-8129-c747016dc357" containerName="placement-log" containerID="cri-o://11f21899d008114f9b81084b04d95a0c833f10d46074769ab7100499bc045dff" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.370818 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6c4bd57ddb-fz2dp" podUID="85620b2d-c74a-4c51-8129-c747016dc357" containerName="placement-api" containerID="cri-o://2d46166a5884cd1f3b001c9ab4d6aa5473e16cee138019fd911e628d4749058b" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.382159 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.384236 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="59384c20-c0a3-4524-9ddb-407b96e8f882" containerName="glance-log" containerID="cri-o://2cade812917319aaec34ab2b32477c1d71dca9c03ae47024b7ad8adb5f1b00d0" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.384762 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="59384c20-c0a3-4524-9ddb-407b96e8f882" containerName="glance-httpd" containerID="cri-o://d944265d72bf3afb4b6b73f0c7c83289738cd7f4ed8517272ac5b673ffa17c8f" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.410938 5049 generic.go:334] "Generic (PLEG): container finished" podID="051db122-80f6-47fc-8d5c-5244d92e593d" containerID="bc748ff2fbd71fb24f80f8b730d7367d5fd71e407cbaf62490be6b914c76b0a8" exitCode=2 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.411024 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
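
The nestedpendingoperations entries above show the kubelet's per-volume retry backoff doubling while the referenced configmaps stay missing: no retries permitted for 1s, then 2s (a 500ms first retry for another volume appears further down). The toy loop below illustrates only that doubling delay; mount_config_volume is a hypothetical stand-in, not kubelet code.

# Illustration of the doubling durationBeforeRetry seen above (0.5s, 1s, 2s, ...).
delay=0.5
until mount_config_volume; do   # hypothetical stand-in for MountVolume.SetUp
    sleep "$delay"
    delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')
done
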
event={"ID":"051db122-80f6-47fc-8d5c-5244d92e593d","Type":"ContainerDied","Data":"bc748ff2fbd71fb24f80f8b730d7367d5fd71e407cbaf62490be6b914c76b0a8"} Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.413333 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-x4ljv"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.421683 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5chqz\" (UniqueName: \"kubernetes.io/projected/7472ee77-bcfd-4e60-a7e4-359076bc334a-kube-api-access-5chqz\") pod \"nova-cell1-544e-account-create-update-6qhf4\" (UID: \"7472ee77-bcfd-4e60-a7e4-359076bc334a\") " pod="openstack/nova-cell1-544e-account-create-update-6qhf4" Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.426230 5049 projected.go:194] Error preparing data for projected volume kube-api-access-5chqz for pod openstack/nova-cell1-544e-account-create-update-6qhf4: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.426295 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7472ee77-bcfd-4e60-a7e4-359076bc334a-kube-api-access-5chqz podName:7472ee77-bcfd-4e60-a7e4-359076bc334a nodeName:}" failed. No retries permitted until 2026-01-27 17:21:20.426276194 +0000 UTC m=+1455.525249743 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5chqz" (UniqueName: "kubernetes.io/projected/7472ee77-bcfd-4e60-a7e4-359076bc334a-kube-api-access-5chqz") pod "nova-cell1-544e-account-create-update-6qhf4" (UID: "7472ee77-bcfd-4e60-a7e4-359076bc334a") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.429667 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-37a8-account-create-update-zmbd7"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.437462 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-x4ljv"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.442248 5049 generic.go:334] "Generic (PLEG): container finished" podID="78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" containerID="21f5e0dc07fb3d38bb7e19a51a4c0dbf807f4111a67586e4958d5638d23ad1b4" exitCode=2 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.442299 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d","Type":"ContainerDied","Data":"21f5e0dc07fb3d38bb7e19a51a4c0dbf807f4111a67586e4958d5638d23ad1b4"} Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.447094 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-2bn6v"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.454042 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-2bn6v"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.461934 5049 generic.go:334] "Generic (PLEG): container finished" podID="f44f2f88-5083-4314-ac57-54597bca9efa" containerID="0dc2349cdaf0626b81e62b3d1171002e1ba55285be948e70262b59439fb7b6e2" exitCode=0 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.462039 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" event={"ID":"f44f2f88-5083-4314-ac57-54597bca9efa","Type":"ContainerDied","Data":"0dc2349cdaf0626b81e62b3d1171002e1ba55285be948e70262b59439fb7b6e2"} Jan 27 17:21:19 crc 
kubenswrapper[5049]: I0127 17:21:19.488944 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6ffd87fcd5-fn4z7"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.489194 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6ffd87fcd5-fn4z7" podUID="8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" containerName="neutron-api" containerID="cri-o://1c2d3e443ad2b2895c21e0101df97d1926bdd94413808edf387efa19ebe1729b" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.489563 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6ffd87fcd5-fn4z7" podUID="8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" containerName="neutron-httpd" containerID="cri-o://2313102ae32c4e0748a8107e17b26893550ff00f6401ee6e5754bc7a542c6ec4" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.499436 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9swgr" event={"ID":"f2cc976d-73bd-4d16-a1f6-84108954384f","Type":"ContainerStarted","Data":"c5076f3639b356f272b0784bd0d19621145ec99cc34ac71def9b727b5234f059"} Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.500338 5049 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-9swgr" secret="" err="secret \"galera-openstack-cell1-dockercfg-djmgl\" not found" Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.506508 5049 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 17:21:19 crc kubenswrapper[5049]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 17:21:19 crc kubenswrapper[5049]: Jan 27 17:21:19 crc kubenswrapper[5049]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 17:21:19 crc kubenswrapper[5049]: Jan 27 17:21:19 crc kubenswrapper[5049]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 17:21:19 crc kubenswrapper[5049]: Jan 27 17:21:19 crc kubenswrapper[5049]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 17:21:19 crc kubenswrapper[5049]: Jan 27 17:21:19 crc kubenswrapper[5049]: if [ -n "" ]; then Jan 27 17:21:19 crc kubenswrapper[5049]: GRANT_DATABASE="" Jan 27 17:21:19 crc kubenswrapper[5049]: else Jan 27 17:21:19 crc kubenswrapper[5049]: GRANT_DATABASE="*" Jan 27 17:21:19 crc kubenswrapper[5049]: fi Jan 27 17:21:19 crc kubenswrapper[5049]: Jan 27 17:21:19 crc kubenswrapper[5049]: # going for maximum compatibility here: Jan 27 17:21:19 crc kubenswrapper[5049]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 17:21:19 crc kubenswrapper[5049]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 17:21:19 crc kubenswrapper[5049]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 27 17:21:19 crc kubenswrapper[5049]: # support updates Jan 27 17:21:19 crc kubenswrapper[5049]: Jan 27 17:21:19 crc kubenswrapper[5049]: $MYSQL_CMD < logger="UnhandledError" Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.518990 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-9swgr" podUID="f2cc976d-73bd-4d16-a1f6-84108954384f" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.524787 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a9fff683-8d1a-4a8c-b45f-8846c09a6f51/ovsdbserver-sb/0.log" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.537395 5049 generic.go:334] "Generic (PLEG): container finished" podID="a9fff683-8d1a-4a8c-b45f-8846c09a6f51" containerID="5662e99c2eaeb51406aed793385fad5230b1d5921534ab4909d10f8d999bf0f2" exitCode=2 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.537410 5049 generic.go:334] "Generic (PLEG): container finished" podID="a9fff683-8d1a-4a8c-b45f-8846c09a6f51" containerID="c3146a5bf097d32d40daa89b3523de0e8cfc28cb7a623381d6ae35bbb1c89d79" exitCode=143 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.537507 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a9fff683-8d1a-4a8c-b45f-8846c09a6f51","Type":"ContainerDied","Data":"5662e99c2eaeb51406aed793385fad5230b1d5921534ab4909d10f8d999bf0f2"} Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.537531 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a9fff683-8d1a-4a8c-b45f-8846c09a6f51","Type":"ContainerDied","Data":"c3146a5bf097d32d40daa89b3523de0e8cfc28cb7a623381d6ae35bbb1c89d79"} Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.551614 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-m6m76_df9b6856-04b3-4630-b200-d99636bdb2fb/openstack-network-exporter/0.log" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.551702 5049 generic.go:334] "Generic (PLEG): container finished" podID="df9b6856-04b3-4630-b200-d99636bdb2fb" containerID="17f19c76a2ac6d447b4808202544c8e5fab56d8363f7e4b4d465252ee3ed9eb6" exitCode=2 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.551750 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-m6m76" event={"ID":"df9b6856-04b3-4630-b200-d99636bdb2fb","Type":"ContainerDied","Data":"17f19c76a2ac6d447b4808202544c8e5fab56d8363f7e4b4d465252ee3ed9eb6"} Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.583275 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.586264 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d89c9402-b4c3-4180-8a61-9e63497ebb66" containerName="glance-log" containerID="cri-o://43c820e42c8a751a91420a3b6d5e21201fc9f6a6613e57796cab346cad30e3d9" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.587109 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d89c9402-b4c3-4180-8a61-9e63497ebb66" containerName="glance-httpd" 
containerID="cri-o://5054a75f289d476269fdd4cec1f526a79442c1e4b02d766eec63bd20c154c9f8" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.709492 5049 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.709554 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f2cc976d-73bd-4d16-a1f6-84108954384f-operator-scripts podName:f2cc976d-73bd-4d16-a1f6-84108954384f nodeName:}" failed. No retries permitted until 2026-01-27 17:21:20.2095411 +0000 UTC m=+1455.308514649 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f2cc976d-73bd-4d16-a1f6-84108954384f-operator-scripts") pod "root-account-create-update-9swgr" (UID: "f2cc976d-73bd-4d16-a1f6-84108954384f") : configmap "openstack-cell1-scripts" not found Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.741958 5049 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 27 17:21:19 crc kubenswrapper[5049]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 27 17:21:19 crc kubenswrapper[5049]: + source /usr/local/bin/container-scripts/functions Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNBridge=br-int Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNRemote=tcp:localhost:6642 Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNEncapType=geneve Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNAvailabilityZones= Jan 27 17:21:19 crc kubenswrapper[5049]: ++ EnableChassisAsGateway=true Jan 27 17:21:19 crc kubenswrapper[5049]: ++ PhysicalNetworks= Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNHostName= Jan 27 17:21:19 crc kubenswrapper[5049]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 27 17:21:19 crc kubenswrapper[5049]: ++ ovs_dir=/var/lib/openvswitch Jan 27 17:21:19 crc kubenswrapper[5049]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 27 17:21:19 crc kubenswrapper[5049]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 27 17:21:19 crc kubenswrapper[5049]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 17:21:19 crc kubenswrapper[5049]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 17:21:19 crc kubenswrapper[5049]: + sleep 0.5 Jan 27 17:21:19 crc kubenswrapper[5049]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 17:21:19 crc kubenswrapper[5049]: + cleanup_ovsdb_server_semaphore Jan 27 17:21:19 crc kubenswrapper[5049]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 17:21:19 crc kubenswrapper[5049]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 27 17:21:19 crc kubenswrapper[5049]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-7s8s5" message=< Jan 27 17:21:19 crc kubenswrapper[5049]: Exiting ovsdb-server (5) [ OK ] Jan 27 17:21:19 crc kubenswrapper[5049]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 27 17:21:19 crc kubenswrapper[5049]: + source /usr/local/bin/container-scripts/functions Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNBridge=br-int Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNRemote=tcp:localhost:6642 Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNEncapType=geneve Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNAvailabilityZones= Jan 27 17:21:19 crc kubenswrapper[5049]: ++ EnableChassisAsGateway=true Jan 27 17:21:19 crc kubenswrapper[5049]: ++ PhysicalNetworks= Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNHostName= Jan 27 17:21:19 crc kubenswrapper[5049]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 27 17:21:19 crc kubenswrapper[5049]: ++ ovs_dir=/var/lib/openvswitch Jan 27 17:21:19 crc kubenswrapper[5049]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 27 17:21:19 crc kubenswrapper[5049]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 27 17:21:19 crc kubenswrapper[5049]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 17:21:19 crc kubenswrapper[5049]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 17:21:19 crc kubenswrapper[5049]: + sleep 0.5 Jan 27 17:21:19 crc kubenswrapper[5049]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 17:21:19 crc kubenswrapper[5049]: + cleanup_ovsdb_server_semaphore Jan 27 17:21:19 crc kubenswrapper[5049]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 17:21:19 crc kubenswrapper[5049]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 27 17:21:19 crc kubenswrapper[5049]: > Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.741990 5049 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 27 17:21:19 crc kubenswrapper[5049]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 27 17:21:19 crc kubenswrapper[5049]: + source /usr/local/bin/container-scripts/functions Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNBridge=br-int Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNRemote=tcp:localhost:6642 Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNEncapType=geneve Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNAvailabilityZones= Jan 27 17:21:19 crc kubenswrapper[5049]: ++ EnableChassisAsGateway=true Jan 27 17:21:19 crc kubenswrapper[5049]: ++ PhysicalNetworks= Jan 27 17:21:19 crc kubenswrapper[5049]: ++ OVNHostName= Jan 27 17:21:19 crc kubenswrapper[5049]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 27 17:21:19 crc kubenswrapper[5049]: ++ ovs_dir=/var/lib/openvswitch Jan 27 17:21:19 crc kubenswrapper[5049]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 27 17:21:19 crc kubenswrapper[5049]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 27 17:21:19 crc kubenswrapper[5049]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 17:21:19 crc kubenswrapper[5049]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 17:21:19 crc kubenswrapper[5049]: + sleep 0.5 Jan 27 17:21:19 crc kubenswrapper[5049]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 27 17:21:19 crc kubenswrapper[5049]: + cleanup_ovsdb_server_semaphore Jan 27 17:21:19 crc kubenswrapper[5049]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 27 17:21:19 crc kubenswrapper[5049]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 27 17:21:19 crc kubenswrapper[5049]: > pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovsdb-server" containerID="cri-o://286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.742017 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovsdb-server" containerID="cri-o://286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.744105 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="032e489f-aab0-40a8-b7ce-99febca8d8be" path="/var/lib/kubelet/pods/032e489f-aab0-40a8-b7ce-99febca8d8be/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.744771 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ebf7681-25b8-4db9-a7b2-86fca3ddc37c" path="/var/lib/kubelet/pods/0ebf7681-25b8-4db9-a7b2-86fca3ddc37c/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.745373 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c03cd98-3721-4e1d-9a3c-5f0547f067ff" path="/var/lib/kubelet/pods/2c03cd98-3721-4e1d-9a3c-5f0547f067ff/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.746442 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ab2df7-a5b7-463d-96c6-b2d208031c97" path="/var/lib/kubelet/pods/34ab2df7-a5b7-463d-96c6-b2d208031c97/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.746944 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4107b32c-cf40-4fe7-bd5b-00c00ff476f8" path="/var/lib/kubelet/pods/4107b32c-cf40-4fe7-bd5b-00c00ff476f8/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.747437 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4293b040-1fd4-4a5f-93e8-273d0d8509ac" path="/var/lib/kubelet/pods/4293b040-1fd4-4a5f-93e8-273d0d8509ac/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.748042 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3" path="/var/lib/kubelet/pods/755c33ae-a2bd-4f5f-bf3b-3b9d094bc0a3/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.748980 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7" path="/var/lib/kubelet/pods/97a09a7d-f35c-4a33-aa15-c2bd8bebe0f7/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.749523 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ffa9f84-a923-4ded-8dc7-a5b69acd6464" path="/var/lib/kubelet/pods/9ffa9f84-a923-4ded-8dc7-a5b69acd6464/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.750155 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa66ef85-42ab-42a2-9ed2-3cd9210d962e" path="/var/lib/kubelet/pods/aa66ef85-42ab-42a2-9ed2-3cd9210d962e/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.751124 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod 
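
The same xtrace appears three times above (once from the exec handler, once in the structured event message, once from the PreStop-hook failure), so the control flow of stop-ovsdb-server.sh can be read straight back out of the log: source a functions file, poll for a semaphore file that some other component creates when the database is safe to stop, remove the semaphore, then stop only ovsdb-server. A sketch reconstructed from that trace follows; the trace proceeds to cleanup after its second check, which suggests the real wait is bounded, but the bound is not visible here, so the simple unbounded form is shown.

#!/bin/bash
# Reconstructed from the xtrace in the log above; the real script may differ.
source "$(dirname "$0")/functions"   # defines the OVN*/FLOWS_*/SEMAPHORE vars echoed in the trace

# Wait for the signal that ovsdb-server can be stopped safely.
while [ ! -f "$SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE" ]; do
    sleep 0.5
done

cleanup_ovsdb_server_semaphore       # per the trace: rm -f "$SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE"
/usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd

The --no-ovs-vswitchd flag stops only the database server and leaves ovs-vswitchd running; the kubelet kills the ovs-vswitchd container separately a moment later, which matches the ordering of the "Killing container" entries above.
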
volumes dir" podUID="bd627f49-e48d-4f81-a41c-3c753fdb27b3" path="/var/lib/kubelet/pods/bd627f49-e48d-4f81-a41c-3c753fdb27b3/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.751688 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d16eba5e-1610-465b-b346-51692b4d7ad0" path="/var/lib/kubelet/pods/d16eba5e-1610-465b-b346-51692b4d7ad0/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.752166 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d22525db-6f4e-458d-83c7-c27f295e8363" path="/var/lib/kubelet/pods/d22525db-6f4e-458d-83c7-c27f295e8363/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.753306 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d64a07f1-44c1-4b82-9ba5-23580e61ddff" path="/var/lib/kubelet/pods/d64a07f1-44c1-4b82-9ba5-23580e61ddff/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.753815 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc638766-f495-40bf-b04e-017d19ca3361" path="/var/lib/kubelet/pods/dc638766-f495-40bf-b04e-017d19ca3361/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.754282 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd9b47ed-4021-4981-8975-d8af2c7d80ce" path="/var/lib/kubelet/pods/dd9b47ed-4021-4981-8975-d8af2c7d80ce/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.754748 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb509f66-608c-454f-aef3-2c52323e916b" path="/var/lib/kubelet/pods/fb509f66-608c-454f-aef3-2c52323e916b/volumes" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.755627 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-ng4wf"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.755648 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-ng4wf"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.755662 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.772839 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d1fd-account-create-update-nvs8x"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.782423 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-g6sl5"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.791897 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8ec5-account-create-update-dwp4c"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.813844 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovs-vswitchd" containerID="cri-o://c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.821848 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d77-account-create-update-wxtnc"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.831447 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-99mrr"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.836988 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="62ffcfe9-3e93-48ee-8d03-9b653d1bfede" containerName="rabbitmq" 
containerID="cri-o://4e671c8bd986d52bd1e5185e5289863dc7f99ba1cde15ecbdb767105fbf5621c" gracePeriod=604800 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.839622 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-g6sl5"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.846262 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-99mrr"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.853582 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.882563 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.882793 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" containerName="nova-metadata-log" containerID="cri-o://1c41eff4d4660fe08786a326daa886f989924f5e641181a59a3f5d76f30bfad2" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.884369 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" containerName="nova-metadata-metadata" containerID="cri-o://a626eed524f20c0031f36e1a58fe0bc431e5f39289e2443f479f2a4aef39497a" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.896220 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-80af-account-create-update-f778m"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.909901 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-sqdsx"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.921458 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-sqdsx"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.933363 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-544e-account-create-update-6qhf4"] Jan 27 17:21:19 crc kubenswrapper[5049]: E0127 17:21:19.934263 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-5chqz operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/nova-cell1-544e-account-create-update-6qhf4" podUID="7472ee77-bcfd-4e60-a7e4-359076bc334a" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.939570 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-tnr58"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.940957 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-m6m76_df9b6856-04b3-4630-b200-d99636bdb2fb/openstack-network-exporter/0.log" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.941004 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.951635 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-tnr58"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.962172 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a9fff683-8d1a-4a8c-b45f-8846c09a6f51/ovsdbserver-sb/0.log" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.962239 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.974035 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.974295 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="294e84c0-d49f-4e45-87d5-085c7accf51e" containerName="nova-api-log" containerID="cri-o://cbe71e694f563bfe04548dc1dfb37796b16b1852241671e8d1a4cc3caf1b96a2" gracePeriod=30 Jan 27 17:21:19 crc kubenswrapper[5049]: I0127 17:21:19.974446 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="294e84c0-d49f-4e45-87d5-085c7accf51e" containerName="nova-api-api" containerID="cri-o://b7a20b9de92877a9ab934476ffa27a2a939c104c0bb643b1807fc727b4746d30" gracePeriod=30 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.001214 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d95d-account-create-update-5h486"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.017168 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9b6856-04b3-4630-b200-d99636bdb2fb-combined-ca-bundle\") pod \"df9b6856-04b3-4630-b200-d99636bdb2fb\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.017208 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-config\") pod \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.017231 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-scripts\") pod \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.017252 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-ovsdb-rundir\") pod \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.017267 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/df9b6856-04b3-4630-b200-d99636bdb2fb-ovs-rundir\") pod \"df9b6856-04b3-4630-b200-d99636bdb2fb\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.017281 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.017307 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9b6856-04b3-4630-b200-d99636bdb2fb-metrics-certs-tls-certs\") pod \"df9b6856-04b3-4630-b200-d99636bdb2fb\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.017341 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-ovsdbserver-sb-tls-certs\") pod \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.017369 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-combined-ca-bundle\") pod \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.017381 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/df9b6856-04b3-4630-b200-d99636bdb2fb-ovn-rundir\") pod \"df9b6856-04b3-4630-b200-d99636bdb2fb\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.017419 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4tbf\" (UniqueName: \"kubernetes.io/projected/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-kube-api-access-k4tbf\") pod \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.017436 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df9b6856-04b3-4630-b200-d99636bdb2fb-config\") pod \"df9b6856-04b3-4630-b200-d99636bdb2fb\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.017457 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g6x5\" (UniqueName: \"kubernetes.io/projected/df9b6856-04b3-4630-b200-d99636bdb2fb-kube-api-access-5g6x5\") pod \"df9b6856-04b3-4630-b200-d99636bdb2fb\" (UID: \"df9b6856-04b3-4630-b200-d99636bdb2fb\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.017518 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-metrics-certs-tls-certs\") pod \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\" (UID: \"a9fff683-8d1a-4a8c-b45f-8846c09a6f51\") " Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.018119 5049 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.018165 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data podName:dbb24b4b-dfbd-431f-8244-098c40f7c24f nodeName:}" failed. 
No retries permitted until 2026-01-27 17:21:22.018151445 +0000 UTC m=+1457.117124994 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data") pod "rabbitmq-cell1-server-0" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f") : configmap "rabbitmq-cell1-config-data" not found Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.018700 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df9b6856-04b3-4630-b200-d99636bdb2fb-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "df9b6856-04b3-4630-b200-d99636bdb2fb" (UID: "df9b6856-04b3-4630-b200-d99636bdb2fb"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.019717 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-config" (OuterVolumeSpecName: "config") pod "a9fff683-8d1a-4a8c-b45f-8846c09a6f51" (UID: "a9fff683-8d1a-4a8c-b45f-8846c09a6f51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.020133 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-scripts" (OuterVolumeSpecName: "scripts") pod "a9fff683-8d1a-4a8c-b45f-8846c09a6f51" (UID: "a9fff683-8d1a-4a8c-b45f-8846c09a6f51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.020419 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "a9fff683-8d1a-4a8c-b45f-8846c09a6f51" (UID: "a9fff683-8d1a-4a8c-b45f-8846c09a6f51"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.020450 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df9b6856-04b3-4630-b200-d99636bdb2fb-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "df9b6856-04b3-4630-b200-d99636bdb2fb" (UID: "df9b6856-04b3-4630-b200-d99636bdb2fb"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.026567 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ffdb84acf31942996807c242b98114c9c8d67e2eeaa568117f878ad3675f41d8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.026734 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-xh8l6"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.034198 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df9b6856-04b3-4630-b200-d99636bdb2fb-config" (OuterVolumeSpecName: "config") pod "df9b6856-04b3-4630-b200-d99636bdb2fb" (UID: "df9b6856-04b3-4630-b200-d99636bdb2fb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.034297 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ffdb84acf31942996807c242b98114c9c8d67e2eeaa568117f878ad3675f41d8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.047064 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-xh8l6"] Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.049849 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ffdb84acf31942996807c242b98114c9c8d67e2eeaa568117f878ad3675f41d8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.049890 5049 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="051db122-80f6-47fc-8d5c-5244d92e593d" containerName="ovn-northd" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.052131 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.054557 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-575b565ff8-wcjw4"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.055797 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-kube-api-access-k4tbf" (OuterVolumeSpecName: "kube-api-access-k4tbf") pod "a9fff683-8d1a-4a8c-b45f-8846c09a6f51" (UID: "a9fff683-8d1a-4a8c-b45f-8846c09a6f51"). InnerVolumeSpecName "kube-api-access-k4tbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.064192 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-973c-account-create-update-bz54b"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.066684 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "a9fff683-8d1a-4a8c-b45f-8846c09a6f51" (UID: "a9fff683-8d1a-4a8c-b45f-8846c09a6f51"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.074519 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-csqpn"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.080329 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df9b6856-04b3-4630-b200-d99636bdb2fb-kube-api-access-5g6x5" (OuterVolumeSpecName: "kube-api-access-5g6x5") pod "df9b6856-04b3-4630-b200-d99636bdb2fb" (UID: "df9b6856-04b3-4630-b200-d99636bdb2fb"). InnerVolumeSpecName "kube-api-access-5g6x5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.084777 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-csqpn"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.097499 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c9f58f99c-tq7mf"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.109199 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.110988 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ee012087-89b0-49aa-bac7-4cd715e80294" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b25b75cad7936ba61f756aac47c7d67fbdfb0f4ef169358438ea19123754ec57" gracePeriod=30 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.116599 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.119044 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-ovsdbserver-nb\") pod \"f44f2f88-5083-4314-ac57-54597bca9efa\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.119134 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2wdc\" (UniqueName: \"kubernetes.io/projected/f44f2f88-5083-4314-ac57-54597bca9efa-kube-api-access-j2wdc\") pod \"f44f2f88-5083-4314-ac57-54597bca9efa\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.119303 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-dns-svc\") pod \"f44f2f88-5083-4314-ac57-54597bca9efa\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.119449 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-dns-swift-storage-0\") pod \"f44f2f88-5083-4314-ac57-54597bca9efa\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.119471 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-config\") pod \"f44f2f88-5083-4314-ac57-54597bca9efa\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.119512 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-ovsdbserver-sb\") pod \"f44f2f88-5083-4314-ac57-54597bca9efa\" (UID: \"f44f2f88-5083-4314-ac57-54597bca9efa\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.120473 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.120494 5049 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.120503 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.120521 5049 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/df9b6856-04b3-4630-b200-d99636bdb2fb-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.120540 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.120548 5049 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/df9b6856-04b3-4630-b200-d99636bdb2fb-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.120563 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4tbf\" (UniqueName: \"kubernetes.io/projected/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-kube-api-access-k4tbf\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.120572 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df9b6856-04b3-4630-b200-d99636bdb2fb-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.120581 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g6x5\" (UniqueName: \"kubernetes.io/projected/df9b6856-04b3-4630-b200-d99636bdb2fb-kube-api-access-5g6x5\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.125374 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9swgr"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.129955 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-fb468df94-7s5tf"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.131079 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" podUID="adfa2378-a75a-41b5-9ea9-71c8da89f750" containerName="barbican-keystone-listener-log" containerID="cri-o://2f71706bcdb06fe2e17742b9ee8311b8e73605a2109fa8b21e0f7b3b6738662b" gracePeriod=30 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.133094 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" podUID="adfa2378-a75a-41b5-9ea9-71c8da89f750" containerName="barbican-keystone-listener" containerID="cri-o://50cdd79f3cf12114854b53391e39352052915fc423551b6aa65b2a4bd254ce59" gracePeriod=30 Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.135847 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.136189 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-575b565ff8-wcjw4"] Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.141695 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.143806 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.144576 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.144698 5049 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovsdb-server" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.144969 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-c9f58f99c-tq7mf"] Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.157042 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.158634 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.158691 5049 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovs-vswitchd" Jan 27 17:21:20 crc kubenswrapper[5049]: W0127 17:21:20.161076 5049 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90792618_1456_45fe_9249_d31ad3b1a682.slice/crio-a5c4f395da1b362ed6fa280cae2a84adfa6a174e6110b8cbaf8b77fb8888da9c WatchSource:0}: Error finding container a5c4f395da1b362ed6fa280cae2a84adfa6a174e6110b8cbaf8b77fb8888da9c: Status 404 returned error can't find the container with id a5c4f395da1b362ed6fa280cae2a84adfa6a174e6110b8cbaf8b77fb8888da9c Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.166015 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5d9fdfc85c-bpzmb"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.166245 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" podUID="25ad8919-34a1-4d3c-8f82-a8902bc857ff" containerName="barbican-worker-log" containerID="cri-o://88a4122cf8632a187f41681a10f1321b9f2c19ffcd4f009a2731da082771c3b1" gracePeriod=30 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.166491 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f44f2f88-5083-4314-ac57-54597bca9efa-kube-api-access-j2wdc" (OuterVolumeSpecName: "kube-api-access-j2wdc") pod "f44f2f88-5083-4314-ac57-54597bca9efa" (UID: "f44f2f88-5083-4314-ac57-54597bca9efa"). InnerVolumeSpecName "kube-api-access-j2wdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.166586 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" podUID="25ad8919-34a1-4d3c-8f82-a8902bc857ff" containerName="barbican-worker" containerID="cri-o://6d80fe5a8e0b9fc37106192401722c045ef256250fc4655a6cdb8a26abe66cca" gracePeriod=30 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.187649 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6c874955f4-txmc8"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.190211 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6c874955f4-txmc8" podUID="fd8752fa-c3a1-4eba-91dc-6af200eb8168" containerName="barbican-api-log" containerID="cri-o://35bd66078657b7d73fc268a9451cde269684cf371f822f4a1fccb3ebe53da3c9" gracePeriod=30 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.191506 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6c874955f4-txmc8" podUID="fd8752fa-c3a1-4eba-91dc-6af200eb8168" containerName="barbican-api" containerID="cri-o://d59eba8208304f410983b2c55e7914a2356771226c32f5c318d3bab3cadfc4df" gracePeriod=30 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.201441 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-85447fcffb-gb5mq"] Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.205981 5049 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 17:21:20 crc kubenswrapper[5049]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc 
kubenswrapper[5049]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: if [ -n "glance" ]; then Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="glance" Jan 27 17:21:20 crc kubenswrapper[5049]: else Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="*" Jan 27 17:21:20 crc kubenswrapper[5049]: fi Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: # going for maximum compatibility here: Jan 27 17:21:20 crc kubenswrapper[5049]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 17:21:20 crc kubenswrapper[5049]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 17:21:20 crc kubenswrapper[5049]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 27 17:21:20 crc kubenswrapper[5049]: # support updates Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: $MYSQL_CMD < logger="UnhandledError" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.206190 5049 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 17:21:20 crc kubenswrapper[5049]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: if [ -n "placement" ]; then Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="placement" Jan 27 17:21:20 crc kubenswrapper[5049]: else Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="*" Jan 27 17:21:20 crc kubenswrapper[5049]: fi Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: # going for maximum compatibility here: Jan 27 17:21:20 crc kubenswrapper[5049]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 17:21:20 crc kubenswrapper[5049]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 17:21:20 crc kubenswrapper[5049]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 27 17:21:20 crc kubenswrapper[5049]: # support updates Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: $MYSQL_CMD < logger="UnhandledError" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.207150 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-d1fd-account-create-update-nvs8x" podUID="90792618-1456-45fe-9249-d31ad3b1a682" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.207400 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-8ec5-account-create-update-dwp4c" podUID="45a74888-1276-4ef7-95f7-939c1df326b6" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.215021 5049 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 17:21:20 crc kubenswrapper[5049]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: if [ -n "cinder" ]; then Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="cinder" Jan 27 17:21:20 crc kubenswrapper[5049]: else Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="*" Jan 27 17:21:20 crc kubenswrapper[5049]: fi Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: # going for maximum compatibility here: Jan 27 17:21:20 crc kubenswrapper[5049]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 17:21:20 crc kubenswrapper[5049]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 17:21:20 crc kubenswrapper[5049]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 27 17:21:20 crc kubenswrapper[5049]: # support updates Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: $MYSQL_CMD < logger="UnhandledError" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.217554 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-37a8-account-create-update-zmbd7" podUID="a6f79ef0-54e0-45cf-a60d-2f27be25b1f6" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.229005 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2wdc\" (UniqueName: \"kubernetes.io/projected/f44f2f88-5083-4314-ac57-54597bca9efa-kube-api-access-j2wdc\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.229086 5049 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.229133 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f2cc976d-73bd-4d16-a1f6-84108954384f-operator-scripts podName:f2cc976d-73bd-4d16-a1f6-84108954384f nodeName:}" failed. No retries permitted until 2026-01-27 17:21:21.229118475 +0000 UTC m=+1456.328092024 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f2cc976d-73bd-4d16-a1f6-84108954384f-operator-scripts") pod "root-account-create-update-9swgr" (UID: "f2cc976d-73bd-4d16-a1f6-84108954384f") : configmap "openstack-cell1-scripts" not found Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.248952 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="95574d5f-6872-4ff3-a7a4-44a960bb46f0" containerName="galera" containerID="cri-o://2396b9674bf7c0eb9526c0c351d8d2c08f432f905d450d6c35283d1d84ab9751" gracePeriod=30 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.285799 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.286116 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c9edd1d0-64dc-4c83-9149-04c772e4e517" containerName="nova-scheduler-scheduler" containerID="cri-o://c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362" gracePeriod=30 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.303868 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-85447fcffb-gb5mq"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.331198 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8ec5-account-create-update-dwp4c"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.332558 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7472ee77-bcfd-4e60-a7e4-359076bc334a-operator-scripts\") pod \"nova-cell1-544e-account-create-update-6qhf4\" (UID: \"7472ee77-bcfd-4e60-a7e4-359076bc334a\") " pod="openstack/nova-cell1-544e-account-create-update-6qhf4" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.332852 5049 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap 
"openstack-cell1-scripts" not found Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.332903 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7472ee77-bcfd-4e60-a7e4-359076bc334a-operator-scripts podName:7472ee77-bcfd-4e60-a7e4-359076bc334a nodeName:}" failed. No retries permitted until 2026-01-27 17:21:22.332889406 +0000 UTC m=+1457.431862955 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7472ee77-bcfd-4e60-a7e4-359076bc334a-operator-scripts") pod "nova-cell1-544e-account-create-update-6qhf4" (UID: "7472ee77-bcfd-4e60-a7e4-359076bc334a") : configmap "openstack-cell1-scripts" not found Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.350330 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d1fd-account-create-update-nvs8x"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.368532 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-37a8-account-create-update-zmbd7"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.391195 5049 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.412314 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="dbb24b4b-dfbd-431f-8244-098c40f7c24f" containerName="rabbitmq" containerID="cri-o://3058eb3d32e2416d54cad80e06c08b015a6883dba23fa9f79957453d1cd58462" gracePeriod=604800 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.415205 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9fff683-8d1a-4a8c-b45f-8846c09a6f51" (UID: "a9fff683-8d1a-4a8c-b45f-8846c09a6f51"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.439761 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5chqz\" (UniqueName: \"kubernetes.io/projected/7472ee77-bcfd-4e60-a7e4-359076bc334a-kube-api-access-5chqz\") pod \"nova-cell1-544e-account-create-update-6qhf4\" (UID: \"7472ee77-bcfd-4e60-a7e4-359076bc334a\") " pod="openstack/nova-cell1-544e-account-create-update-6qhf4" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.439911 5049 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.439922 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.448958 5049 projected.go:194] Error preparing data for projected volume kube-api-access-5chqz for pod openstack/nova-cell1-544e-account-create-update-6qhf4: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.449024 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7472ee77-bcfd-4e60-a7e4-359076bc334a-kube-api-access-5chqz podName:7472ee77-bcfd-4e60-a7e4-359076bc334a nodeName:}" failed. No retries permitted until 2026-01-27 17:21:22.449004661 +0000 UTC m=+1457.547978210 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5chqz" (UniqueName: "kubernetes.io/projected/7472ee77-bcfd-4e60-a7e4-359076bc334a-kube-api-access-5chqz") pod "nova-cell1-544e-account-create-update-6qhf4" (UID: "7472ee77-bcfd-4e60-a7e4-359076bc334a") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.484330 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-973c-account-create-update-bz54b"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.512526 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9b6856-04b3-4630-b200-d99636bdb2fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df9b6856-04b3-4630-b200-d99636bdb2fb" (UID: "df9b6856-04b3-4630-b200-d99636bdb2fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.541629 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9b6856-04b3-4630-b200-d99636bdb2fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.569767 5049 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 17:21:20 crc kubenswrapper[5049]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: if [ -n "barbican" ]; then Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="barbican" Jan 27 17:21:20 crc kubenswrapper[5049]: else Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="*" Jan 27 17:21:20 crc kubenswrapper[5049]: fi Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: # going for maximum compatibility here: Jan 27 17:21:20 crc kubenswrapper[5049]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 17:21:20 crc kubenswrapper[5049]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 17:21:20 crc kubenswrapper[5049]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 27 17:21:20 crc kubenswrapper[5049]: # support updates Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: $MYSQL_CMD < logger="UnhandledError" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.571278 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-973c-account-create-update-bz54b" podUID="f3a5b314-aabd-4f2e-a9a4-fb2509b9697b" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.583426 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f44f2f88-5083-4314-ac57-54597bca9efa" (UID: "f44f2f88-5083-4314-ac57-54597bca9efa"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.619795 5049 generic.go:334] "Generic (PLEG): container finished" podID="467be3c3-34b2-4cea-8785-bacf5a6a5a39" containerID="d37d30713855c20058ddaa0bf88d078a2270dcf3c57898ea717f889e47119dd9" exitCode=137 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.620473 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50fdaba242febf0c89a5e5b1f1f9299f79e2c92946600c80e613cb5faad63c00" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.632399 5049 generic.go:334] "Generic (PLEG): container finished" podID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" containerID="1c41eff4d4660fe08786a326daa886f989924f5e641181a59a3f5d76f30bfad2" exitCode=143 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.632472 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29","Type":"ContainerDied","Data":"1c41eff4d4660fe08786a326daa886f989924f5e641181a59a3f5d76f30bfad2"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.635312 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9b6856-04b3-4630-b200-d99636bdb2fb-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "df9b6856-04b3-4630-b200-d99636bdb2fb" (UID: "df9b6856-04b3-4630-b200-d99636bdb2fb"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.637905 5049 generic.go:334] "Generic (PLEG): container finished" podID="85620b2d-c74a-4c51-8129-c747016dc357" containerID="11f21899d008114f9b81084b04d95a0c833f10d46074769ab7100499bc045dff" exitCode=143 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.638122 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c4bd57ddb-fz2dp" event={"ID":"85620b2d-c74a-4c51-8129-c747016dc357","Type":"ContainerDied","Data":"11f21899d008114f9b81084b04d95a0c833f10d46074769ab7100499bc045dff"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.640507 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.640532 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-h7clt" event={"ID":"f44f2f88-5083-4314-ac57-54597bca9efa","Type":"ContainerDied","Data":"ce19a9ad63ad7a7d9111b62e35b46b5021e21a31d395ab7826e9b632141dff7b"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.640572 5049 scope.go:117] "RemoveContainer" containerID="0dc2349cdaf0626b81e62b3d1171002e1ba55285be948e70262b59439fb7b6e2" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.647979 5049 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df9b6856-04b3-4630-b200-d99636bdb2fb-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.648146 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.653996 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-m6m76_df9b6856-04b3-4630-b200-d99636bdb2fb/openstack-network-exporter/0.log" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.654119 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-m6m76" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.654121 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-m6m76" event={"ID":"df9b6856-04b3-4630-b200-d99636bdb2fb","Type":"ContainerDied","Data":"35fcd5ff80273dfdb97e6e8dcc190bfe7218ae99bd53a4e261f8557845a4005e"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672085 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerID="3a93d9cd74365dc4b079066ac2c67767791d85d61773322bf02b5a01b937828e" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672122 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerID="49580b0e03bfc33665c28a177a2a91fdc57ef1f9597020b693192a0906c2b084" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672133 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerID="6cb007db423c0f0689563185212e4dbebe9824bceac2708df056fd0a50a20fec" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672143 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerID="d642302c59a400ff1d5fcfc04bd4f3e11605424d17a253b3bbb525c32f0483b0" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672155 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerID="1822fafc255330bb427411a6c035e5846b79b8c89c234fa42ba8b370aa1361a1" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672163 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerID="eb157c80aa43da636b4e1b90406a76023c1144590efb8be2dbcad94062ef1055" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672171 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" 
containerID="3b0b393e0e4d1401963714ab9252d9671cfb2791dc7e153bd5d4476e4584159a" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672181 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerID="a5245f69092058d5c8b04c536b6c645c68af7ffcd316ef3d92d2eec0e910b537" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672171 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"3a93d9cd74365dc4b079066ac2c67767791d85d61773322bf02b5a01b937828e"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672224 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"49580b0e03bfc33665c28a177a2a91fdc57ef1f9597020b693192a0906c2b084"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672238 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"6cb007db423c0f0689563185212e4dbebe9824bceac2708df056fd0a50a20fec"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672191 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerID="a7718a70cb7ace0faec55cd3c7efc512f21a009631dd55ff9c6521f3669078d3" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672301 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"d642302c59a400ff1d5fcfc04bd4f3e11605424d17a253b3bbb525c32f0483b0"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672307 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerID="22d0b0ce240d4ffa1f356f14354013ccc5a82232f0a1c7cab9e0c778a57cf470" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672320 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerID="c54193743efddc35cc5d308539a2b6ddf4f91a56153042a087edab8b4178d076" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672322 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"1822fafc255330bb427411a6c035e5846b79b8c89c234fa42ba8b370aa1361a1"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672329 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerID="224833d0edff6b380a8a1b7d43dabc8125423b2aafe62311878605dff679aa61" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672337 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"eb157c80aa43da636b4e1b90406a76023c1144590efb8be2dbcad94062ef1055"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672353 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"3b0b393e0e4d1401963714ab9252d9671cfb2791dc7e153bd5d4476e4584159a"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672363 5049 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"a5245f69092058d5c8b04c536b6c645c68af7ffcd316ef3d92d2eec0e910b537"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672374 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"a7718a70cb7ace0faec55cd3c7efc512f21a009631dd55ff9c6521f3669078d3"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672338 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerID="03c47c98df171b7b9208a6248b44605bd8cbfeced5546e25c5829b8a2c6bc049" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672385 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"22d0b0ce240d4ffa1f356f14354013ccc5a82232f0a1c7cab9e0c778a57cf470"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672397 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"c54193743efddc35cc5d308539a2b6ddf4f91a56153042a087edab8b4178d076"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672410 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"224833d0edff6b380a8a1b7d43dabc8125423b2aafe62311878605dff679aa61"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672424 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"03c47c98df171b7b9208a6248b44605bd8cbfeced5546e25c5829b8a2c6bc049"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672436 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"a2dfb2a3d54aba3ae55d7279c088ecb8bf8242fef6f3a7d78a6cab7c08d00f25"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.672396 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerID="a2dfb2a3d54aba3ae55d7279c088ecb8bf8242fef6f3a7d78a6cab7c08d00f25" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.679960 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-config" (OuterVolumeSpecName: "config") pod "f44f2f88-5083-4314-ac57-54597bca9efa" (UID: "f44f2f88-5083-4314-ac57-54597bca9efa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.684844 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-973c-account-create-update-bz54b" event={"ID":"f3a5b314-aabd-4f2e-a9a4-fb2509b9697b","Type":"ContainerStarted","Data":"41c7b2fbdd696dec27401df06774f2a479c03c83553db4a9794242a410dd0ea9"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.688644 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8ec5-account-create-update-dwp4c" event={"ID":"45a74888-1276-4ef7-95f7-939c1df326b6","Type":"ContainerStarted","Data":"9534c013662d1c90bf57e6988ed9a92418dc63031835a8a790a2bc1f788d969a"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.695926 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a9fff683-8d1a-4a8c-b45f-8846c09a6f51/ovsdbserver-sb/0.log" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.696101 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.696358 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a9fff683-8d1a-4a8c-b45f-8846c09a6f51","Type":"ContainerDied","Data":"f05704ca09d5244d5d2ba51448fdc91ee5390f128c05aee4a283d2cdda182bce"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.700892 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d1fd-account-create-update-nvs8x" event={"ID":"90792618-1456-45fe-9249-d31ad3b1a682","Type":"ContainerStarted","Data":"a5c4f395da1b362ed6fa280cae2a84adfa6a174e6110b8cbaf8b77fb8888da9c"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.707582 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85447fcffb-gb5mq" event={"ID":"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52","Type":"ContainerStarted","Data":"7c03e3771bf1bd786586f54ab25dcca250b4517db17ad28d3eef190ed8cdd2c1"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.711461 5049 generic.go:334] "Generic (PLEG): container finished" podID="8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" containerID="2313102ae32c4e0748a8107e17b26893550ff00f6401ee6e5754bc7a542c6ec4" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.711515 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ffd87fcd5-fn4z7" event={"ID":"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5","Type":"ContainerDied","Data":"2313102ae32c4e0748a8107e17b26893550ff00f6401ee6e5754bc7a542c6ec4"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.726123 5049 generic.go:334] "Generic (PLEG): container finished" podID="fd8752fa-c3a1-4eba-91dc-6af200eb8168" containerID="35bd66078657b7d73fc268a9451cde269684cf371f822f4a1fccb3ebe53da3c9" exitCode=143 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.726199 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c874955f4-txmc8" event={"ID":"fd8752fa-c3a1-4eba-91dc-6af200eb8168","Type":"ContainerDied","Data":"35bd66078657b7d73fc268a9451cde269684cf371f822f4a1fccb3ebe53da3c9"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.743231 5049 generic.go:334] "Generic (PLEG): container finished" podID="59384c20-c0a3-4524-9ddb-407b96e8f882" containerID="2cade812917319aaec34ab2b32477c1d71dca9c03ae47024b7ad8adb5f1b00d0" exitCode=143 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.743320 5049 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-external-api-0" event={"ID":"59384c20-c0a3-4524-9ddb-407b96e8f882","Type":"ContainerDied","Data":"2cade812917319aaec34ab2b32477c1d71dca9c03ae47024b7ad8adb5f1b00d0"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.744166 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a9fff683-8d1a-4a8c-b45f-8846c09a6f51" (UID: "a9fff683-8d1a-4a8c-b45f-8846c09a6f51"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.753937 5049 generic.go:334] "Generic (PLEG): container finished" podID="294e84c0-d49f-4e45-87d5-085c7accf51e" containerID="cbe71e694f563bfe04548dc1dfb37796b16b1852241671e8d1a4cc3caf1b96a2" exitCode=143 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.754034 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"294e84c0-d49f-4e45-87d5-085c7accf51e","Type":"ContainerDied","Data":"cbe71e694f563bfe04548dc1dfb37796b16b1852241671e8d1a4cc3caf1b96a2"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.755700 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.755722 5049 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.757807 5049 generic.go:334] "Generic (PLEG): container finished" podID="492cb82e-33fb-4fc7-85e2-7d4285e5ff00" containerID="3113dcce28048dab388fa9369937d0bd0a1fc6c1ae5f9d46acfb897247e15c0d" exitCode=143 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.757858 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492cb82e-33fb-4fc7-85e2-7d4285e5ff00","Type":"ContainerDied","Data":"3113dcce28048dab388fa9369937d0bd0a1fc6c1ae5f9d46acfb897247e15c0d"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.757962 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "a9fff683-8d1a-4a8c-b45f-8846c09a6f51" (UID: "a9fff683-8d1a-4a8c-b45f-8846c09a6f51"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.767561 5049 generic.go:334] "Generic (PLEG): container finished" podID="25ad8919-34a1-4d3c-8f82-a8902bc857ff" containerID="88a4122cf8632a187f41681a10f1321b9f2c19ffcd4f009a2731da082771c3b1" exitCode=143 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.767646 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" event={"ID":"25ad8919-34a1-4d3c-8f82-a8902bc857ff","Type":"ContainerDied","Data":"88a4122cf8632a187f41681a10f1321b9f2c19ffcd4f009a2731da082771c3b1"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.769925 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" event={"ID":"7b36a6d6-32ec-4c02-b274-319cb860222c","Type":"ContainerStarted","Data":"7560c386262e552991ab259cf0a76b6d1070d688c0810ffc7b01a4e88c45247b"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.769951 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" event={"ID":"7b36a6d6-32ec-4c02-b274-319cb860222c","Type":"ContainerStarted","Data":"cd9b8f8965f081ff7d30e8ddfa484f660c50c5fdc8238e8e41dc887200b46f2e"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.771466 5049 generic.go:334] "Generic (PLEG): container finished" podID="d89c9402-b4c3-4180-8a61-9e63497ebb66" containerID="43c820e42c8a751a91420a3b6d5e21201fc9f6a6613e57796cab346cad30e3d9" exitCode=143 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.771528 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d89c9402-b4c3-4180-8a61-9e63497ebb66","Type":"ContainerDied","Data":"43c820e42c8a751a91420a3b6d5e21201fc9f6a6613e57796cab346cad30e3d9"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.777747 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d/ovsdbserver-nb/0.log" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.777794 5049 generic.go:334] "Generic (PLEG): container finished" podID="78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" containerID="626c86acb733344d07e343f4289761a9f30520eda1c48c93eebace6d3cdd0601" exitCode=143 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.777870 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d","Type":"ContainerDied","Data":"626c86acb733344d07e343f4289761a9f30520eda1c48c93eebace6d3cdd0601"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.777894 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d","Type":"ContainerDied","Data":"6728bd8422561c26b26822a9ec1e7908e7bf65ce97cbda63ec857f3e88033fd1"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.777906 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6728bd8422561c26b26822a9ec1e7908e7bf65ce97cbda63ec857f3e88033fd1" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.791016 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d77-account-create-update-wxtnc"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.808388 5049 generic.go:334] "Generic (PLEG): container finished" podID="0d76a4d6-b3a5-4931-9fb1-13531143ebaa" 
containerID="86ae2065456e0be818d0d2f291c75fa54963a08b110e6a19c05980e7f58e4078" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.808486 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d76a4d6-b3a5-4931-9fb1-13531143ebaa","Type":"ContainerDied","Data":"86ae2065456e0be818d0d2f291c75fa54963a08b110e6a19c05980e7f58e4078"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.811580 5049 generic.go:334] "Generic (PLEG): container finished" podID="adfa2378-a75a-41b5-9ea9-71c8da89f750" containerID="2f71706bcdb06fe2e17742b9ee8311b8e73605a2109fa8b21e0f7b3b6738662b" exitCode=143 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.811634 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" event={"ID":"adfa2378-a75a-41b5-9ea9-71c8da89f750","Type":"ContainerDied","Data":"2f71706bcdb06fe2e17742b9ee8311b8e73605a2109fa8b21e0f7b3b6738662b"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.814232 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-37a8-account-create-update-zmbd7" event={"ID":"a6f79ef0-54e0-45cf-a60d-2f27be25b1f6","Type":"ContainerStarted","Data":"45fa7081305c265dfe1f88b481c25b423d3bbcfc243ef3d23b2bb32385684eac"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.833874 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9f58f99c-tq7mf" event={"ID":"e55f335e-88f4-4e41-a177-0771cfd532c4","Type":"ContainerStarted","Data":"9a3023e42b1b3429592486ac2e6b5978a9de9fd3b130d6fd7a2ccd3cd217fb81"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.834147 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9f58f99c-tq7mf" event={"ID":"e55f335e-88f4-4e41-a177-0771cfd532c4","Type":"ContainerStarted","Data":"dc0e83537d03389124c848e9ddbeeea4be2cce802e2b99006646d1207f8bed4f"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.837372 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-80af-account-create-update-f778m"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.849729 5049 scope.go:117] "RemoveContainer" containerID="6d3929b43b9f145b761b25260c8babd94db0e221820b663ce447af35822c1b0e" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.849921 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d95d-account-create-update-5h486"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.856720 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9fff683-8d1a-4a8c-b45f-8846c09a6f51-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.861542 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-m6m76"] Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.869393 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-m6m76"] Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.878494 5049 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 17:21:20 crc kubenswrapper[5049]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_REMOTE_HOST="" source 
/var/lib/operator-scripts/mysql_root_auth.sh Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: if [ -n "neutron" ]; then Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="neutron" Jan 27 17:21:20 crc kubenswrapper[5049]: else Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="*" Jan 27 17:21:20 crc kubenswrapper[5049]: fi Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: # going for maximum compatibility here: Jan 27 17:21:20 crc kubenswrapper[5049]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 17:21:20 crc kubenswrapper[5049]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 17:21:20 crc kubenswrapper[5049]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 27 17:21:20 crc kubenswrapper[5049]: # support updates Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: $MYSQL_CMD < logger="UnhandledError" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.879704 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d/ovsdbserver-nb/0.log" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.879756 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.879812 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-6d77-account-create-update-wxtnc" podUID="e35d2b53-2aed-4405-b61e-abe411cb3b42" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.885415 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f44f2f88-5083-4314-ac57-54597bca9efa" (UID: "f44f2f88-5083-4314-ac57-54597bca9efa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.904803 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.907590 5049 generic.go:334] "Generic (PLEG): container finished" podID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" exitCode=0 Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.908150 5049 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-9swgr" secret="" err="secret \"galera-openstack-cell1-dockercfg-djmgl\" not found" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.908474 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7s8s5" event={"ID":"009eaa47-1d7c-46e6-aeea-b25f77ea35a9","Type":"ContainerDied","Data":"286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5"} Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.908524 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-544e-account-create-update-6qhf4" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.927374 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f44f2f88-5083-4314-ac57-54597bca9efa" (UID: "f44f2f88-5083-4314-ac57-54597bca9efa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.947618 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-544e-account-create-update-6qhf4" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.948602 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f44f2f88-5083-4314-ac57-54597bca9efa" (UID: "f44f2f88-5083-4314-ac57-54597bca9efa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.957413 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/467be3c3-34b2-4cea-8785-bacf5a6a5a39-openstack-config-secret\") pod \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.957468 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxjjf\" (UniqueName: \"kubernetes.io/projected/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-kube-api-access-dxjjf\") pod \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.957564 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-ovsdbserver-nb-tls-certs\") pod \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.957589 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.957622 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-scripts\") pod \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 
17:21:20.957645 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-ovsdb-rundir\") pod \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.957731 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-config\") pod \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.957773 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-combined-ca-bundle\") pod \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.957796 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-metrics-certs-tls-certs\") pod \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\" (UID: \"78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.957843 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/467be3c3-34b2-4cea-8785-bacf5a6a5a39-openstack-config\") pod \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.957882 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467be3c3-34b2-4cea-8785-bacf5a6a5a39-combined-ca-bundle\") pod \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.957938 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npbt9\" (UniqueName: \"kubernetes.io/projected/467be3c3-34b2-4cea-8785-bacf5a6a5a39-kube-api-access-npbt9\") pod \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\" (UID: \"467be3c3-34b2-4cea-8785-bacf5a6a5a39\") " Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.958496 5049 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.958556 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.958573 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f44f2f88-5083-4314-ac57-54597bca9efa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.959211 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-config" (OuterVolumeSpecName: "config") pod "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" 
(UID: "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.960726 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-scripts" (OuterVolumeSpecName: "scripts") pod "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" (UID: "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.961263 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" (UID: "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.967828 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-kube-api-access-dxjjf" (OuterVolumeSpecName: "kube-api-access-dxjjf") pod "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" (UID: "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d"). InnerVolumeSpecName "kube-api-access-dxjjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: I0127 17:21:20.971073 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467be3c3-34b2-4cea-8785-bacf5a6a5a39-kube-api-access-npbt9" (OuterVolumeSpecName: "kube-api-access-npbt9") pod "467be3c3-34b2-4cea-8785-bacf5a6a5a39" (UID: "467be3c3-34b2-4cea-8785-bacf5a6a5a39"). InnerVolumeSpecName "kube-api-access-npbt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.972483 5049 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 17:21:20 crc kubenswrapper[5049]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: if [ -n "nova_cell0" ]; then Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="nova_cell0" Jan 27 17:21:20 crc kubenswrapper[5049]: else Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="*" Jan 27 17:21:20 crc kubenswrapper[5049]: fi Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: # going for maximum compatibility here: Jan 27 17:21:20 crc kubenswrapper[5049]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 17:21:20 crc kubenswrapper[5049]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 17:21:20 crc kubenswrapper[5049]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 27 17:21:20 crc kubenswrapper[5049]: # support updates Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: $MYSQL_CMD < logger="UnhandledError" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.974906 5049 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 17:21:20 crc kubenswrapper[5049]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: if [ -n "" ]; then Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="" Jan 27 17:21:20 crc kubenswrapper[5049]: else Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="*" Jan 27 17:21:20 crc kubenswrapper[5049]: fi Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: # going for maximum compatibility here: Jan 27 17:21:20 crc kubenswrapper[5049]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 17:21:20 crc kubenswrapper[5049]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 17:21:20 crc kubenswrapper[5049]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 27 17:21:20 crc kubenswrapper[5049]: # support updates Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: $MYSQL_CMD < logger="UnhandledError" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.974950 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-80af-account-create-update-f778m" podUID="81cf45aa-76f9-41d4-9385-7796174601b0" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.975201 5049 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 17:21:20 crc kubenswrapper[5049]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: if [ -n "nova_api" ]; then Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="nova_api" Jan 27 17:21:20 crc kubenswrapper[5049]: else Jan 27 17:21:20 crc kubenswrapper[5049]: GRANT_DATABASE="*" Jan 27 17:21:20 crc kubenswrapper[5049]: fi Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: # going for maximum compatibility 
here: Jan 27 17:21:20 crc kubenswrapper[5049]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 17:21:20 crc kubenswrapper[5049]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 17:21:20 crc kubenswrapper[5049]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 27 17:21:20 crc kubenswrapper[5049]: # support updates Jan 27 17:21:20 crc kubenswrapper[5049]: Jan 27 17:21:20 crc kubenswrapper[5049]: $MYSQL_CMD < logger="UnhandledError" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.977097 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-d95d-account-create-update-5h486" podUID="98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282" Jan 27 17:21:20 crc kubenswrapper[5049]: E0127 17:21:20.977147 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-9swgr" podUID="f2cc976d-73bd-4d16-a1f6-84108954384f" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.003296 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" (UID: "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.005441 5049 scope.go:117] "RemoveContainer" containerID="17f19c76a2ac6d447b4808202544c8e5fab56d8363f7e4b4d465252ee3ed9eb6" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.061426 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.061484 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.061584 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.061597 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.061610 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npbt9\" (UniqueName: \"kubernetes.io/projected/467be3c3-34b2-4cea-8785-bacf5a6a5a39-kube-api-access-npbt9\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.061620 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxjjf\" (UniqueName: \"kubernetes.io/projected/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-kube-api-access-dxjjf\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.079831 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
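
The three "Unhandled Error" dumps above all print the same mariadb-account-create-update container command. The SQL body is elided at "$MYSQL_CMD <" by the error formatter, and the blank fields in the rendered script ("mysql -h -u root -P 3306" with no host, "if [ -n "" ]") are operator template values that never resolved. As a sketch only, assuming nothing beyond the script's own logged comments, the elided heredoc would follow this create/alter/grant pattern; the ${DatabaseUser} variable is hypothetical, while ${DatabasePassword} and ${GRANT_DATABASE} appear in the logged script:

    # Hypothetical reconstruction -- the heredoc body is not recoverable from
    # the log. It follows the three logged comments:
    # 1. CREATE first, since MySQL 8 no longer allows implicit create via GRANT.
    # 2. Avoid MariaDB-only "CREATE OR REPLACE USER" for portability.
    # 3. Set password/TLS via ALTER so re-running the job acts as an update.
    # ${DatabaseUser} is an assumed name, not taken from the log.
    $MYSQL_CMD <<EOF
    CREATE USER IF NOT EXISTS '${DatabaseUser}'@'%';
    ALTER USER '${DatabaseUser}'@'%' IDENTIFIED BY '${DatabasePassword}';
    GRANT ALL PRIVILEGES ON ${GRANT_DATABASE}.* TO '${DatabaseUser}'@'%';
    EOF

The container never gets that far, though: the pod_workers errors following each dump report CreateContainerConfigError, meaning the kubelet could not resolve the referenced DB secrets, so the script was never started.
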
Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.092324 5049 scope.go:117] "RemoveContainer" containerID="5662e99c2eaeb51406aed793385fad5230b1d5921534ab4909d10f8d999bf0f2" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.101379 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.145938 5049 scope.go:117] "RemoveContainer" containerID="c3146a5bf097d32d40daa89b3523de0e8cfc28cb7a623381d6ae35bbb1c89d79" Jan 27 17:21:21 crc kubenswrapper[5049]: E0127 17:21:21.265170 5049 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 27 17:21:21 crc kubenswrapper[5049]: E0127 17:21:21.265247 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f2cc976d-73bd-4d16-a1f6-84108954384f-operator-scripts podName:f2cc976d-73bd-4d16-a1f6-84108954384f nodeName:}" failed. No retries permitted until 2026-01-27 17:21:23.265228475 +0000 UTC m=+1458.364202034 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f2cc976d-73bd-4d16-a1f6-84108954384f-operator-scripts") pod "root-account-create-update-9swgr" (UID: "f2cc976d-73bd-4d16-a1f6-84108954384f") : configmap "openstack-cell1-scripts" not found Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.361919 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/467be3c3-34b2-4cea-8785-bacf5a6a5a39-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "467be3c3-34b2-4cea-8785-bacf5a6a5a39" (UID: "467be3c3-34b2-4cea-8785-bacf5a6a5a39"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.366911 5049 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/467be3c3-34b2-4cea-8785-bacf5a6a5a39-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: E0127 17:21:21.366992 5049 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 17:21:21 crc kubenswrapper[5049]: E0127 17:21:21.367035 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data podName:62ffcfe9-3e93-48ee-8d03-9b653d1bfede nodeName:}" failed. No retries permitted until 2026-01-27 17:21:25.367021059 +0000 UTC m=+1460.465994608 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data") pod "rabbitmq-server-0" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede") : configmap "rabbitmq-config-data" not found Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.390300 5049 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.471154 5049 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.491821 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467be3c3-34b2-4cea-8785-bacf5a6a5a39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "467be3c3-34b2-4cea-8785-bacf5a6a5a39" (UID: "467be3c3-34b2-4cea-8785-bacf5a6a5a39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.504764 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" (UID: "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.519149 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/467be3c3-34b2-4cea-8785-bacf5a6a5a39-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "467be3c3-34b2-4cea-8785-bacf5a6a5a39" (UID: "467be3c3-34b2-4cea-8785-bacf5a6a5a39"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.562712 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" (UID: "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
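
The paired "Couldn't get configMap ... not found" / nestedpendingoperations entries above are the kubelet's normal per-volume retry loop: MountVolume.SetUp fails while the referenced object is absent, and retries back off exponentially (durationBeforeRetry 2s for operator-scripts, then 4s for rabbitmq's config-data). Nothing on the node needs fixing; the pods recover as soon as the operators create the missing objects. A hedged triage sketch using standard oc commands, with the object names taken from the errors above:

    # Do the referenced objects exist yet? (namespace and names from the log)
    oc -n openstack get configmap openstack-cell1-scripts rabbitmq-config-data
    oc -n openstack get secret nova-cell0-db-secret nova-api-db-secret \
      openstack-cell1-mariadb-root-db-secret
    # Same picture from the API side: mount failures surface as events.
    oc -n openstack get events --field-selector reason=FailedMount
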
Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.575502 5049 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/467be3c3-34b2-4cea-8785-bacf5a6a5a39-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.575533 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.575544 5049 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.575556 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/467be3c3-34b2-4cea-8785-bacf5a6a5a39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.597771 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" (UID: "78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.677151 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09c43be6-a13b-4447-b9f8-e6aeacd4b2be" path="/var/lib/kubelet/pods/09c43be6-a13b-4447-b9f8-e6aeacd4b2be/volumes" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.678142 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17b9e608-225d-4568-9309-2228a13b66f7" path="/var/lib/kubelet/pods/17b9e608-225d-4568-9309-2228a13b66f7/volumes" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.682094 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.683816 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="255716e0-246f-4167-a784-df7005bade5d" path="/var/lib/kubelet/pods/255716e0-246f-4167-a784-df7005bade5d/volumes" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.684462 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="467be3c3-34b2-4cea-8785-bacf5a6a5a39" path="/var/lib/kubelet/pods/467be3c3-34b2-4cea-8785-bacf5a6a5a39/volumes" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.685079 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ffaea7-3bce-404a-9717-0e0e9b49c9d4" path="/var/lib/kubelet/pods/61ffaea7-3bce-404a-9717-0e0e9b49c9d4/volumes" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.695534 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00809fd-1407-4ccf-9cd5-09cc89ac751d" path="/var/lib/kubelet/pods/a00809fd-1407-4ccf-9cd5-09cc89ac751d/volumes" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.697273 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="a9fff683-8d1a-4a8c-b45f-8846c09a6f51" path="/var/lib/kubelet/pods/a9fff683-8d1a-4a8c-b45f-8846c09a6f51/volumes" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.697920 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df9b6856-04b3-4630-b200-d99636bdb2fb" path="/var/lib/kubelet/pods/df9b6856-04b3-4630-b200-d99636bdb2fb/volumes" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.710946 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8ec5-account-create-update-dwp4c" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.724824 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-973c-account-create-update-bz54b" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.726806 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e466ed3f-24cb-4a9a-9820-c4f5a31b7982" path="/var/lib/kubelet/pods/e466ed3f-24cb-4a9a-9820-c4f5a31b7982/volumes" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.727640 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcaf6f1b-c353-436b-aeb6-23344442588b" path="/var/lib/kubelet/pods/fcaf6f1b-c353-436b-aeb6-23344442588b/volumes" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.728635 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6dbbddc5bc-5k4jm"] Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.728664 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-h7clt"] Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.728691 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-h7clt"] Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.728889 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" podUID="a923a49d-7e17-40a5-975a-9f4a39f92d51" containerName="proxy-httpd" containerID="cri-o://d21f3a14ae01f4df5aae43325cf7e59b6173d081cb638021f0c8cac28f33e789" gracePeriod=30 Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.729018 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" podUID="a923a49d-7e17-40a5-975a-9f4a39f92d51" containerName="proxy-server" containerID="cri-o://7726471af8c08f6c10586088666d4b505ea63dc887de8daf8cc743ea2173f215" gracePeriod=30 Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.770714 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d1fd-account-create-update-nvs8x" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.806307 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="dbb24b4b-dfbd-431f-8244-098c40f7c24f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.821134 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-37a8-account-create-update-zmbd7" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.832275 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.889799 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90792618-1456-45fe-9249-d31ad3b1a682-operator-scripts\") pod \"90792618-1456-45fe-9249-d31ad3b1a682\" (UID: \"90792618-1456-45fe-9249-d31ad3b1a682\") " Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.889962 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlhpc\" (UniqueName: \"kubernetes.io/projected/45a74888-1276-4ef7-95f7-939c1df326b6-kube-api-access-qlhpc\") pod \"45a74888-1276-4ef7-95f7-939c1df326b6\" (UID: \"45a74888-1276-4ef7-95f7-939c1df326b6\") " Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.890374 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3a5b314-aabd-4f2e-a9a4-fb2509b9697b-operator-scripts\") pod \"f3a5b314-aabd-4f2e-a9a4-fb2509b9697b\" (UID: \"f3a5b314-aabd-4f2e-a9a4-fb2509b9697b\") " Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.890503 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpk6l\" (UniqueName: \"kubernetes.io/projected/90792618-1456-45fe-9249-d31ad3b1a682-kube-api-access-jpk6l\") pod \"90792618-1456-45fe-9249-d31ad3b1a682\" (UID: \"90792618-1456-45fe-9249-d31ad3b1a682\") " Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.890538 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a74888-1276-4ef7-95f7-939c1df326b6-operator-scripts\") pod \"45a74888-1276-4ef7-95f7-939c1df326b6\" (UID: \"45a74888-1276-4ef7-95f7-939c1df326b6\") " Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.890589 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6spr\" (UniqueName: \"kubernetes.io/projected/f3a5b314-aabd-4f2e-a9a4-fb2509b9697b-kube-api-access-g6spr\") pod \"f3a5b314-aabd-4f2e-a9a4-fb2509b9697b\" (UID: \"f3a5b314-aabd-4f2e-a9a4-fb2509b9697b\") " Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.890623 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90792618-1456-45fe-9249-d31ad3b1a682-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90792618-1456-45fe-9249-d31ad3b1a682" (UID: "90792618-1456-45fe-9249-d31ad3b1a682"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.891163 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3a5b314-aabd-4f2e-a9a4-fb2509b9697b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3a5b314-aabd-4f2e-a9a4-fb2509b9697b" (UID: "f3a5b314-aabd-4f2e-a9a4-fb2509b9697b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.891620 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45a74888-1276-4ef7-95f7-939c1df326b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45a74888-1276-4ef7-95f7-939c1df326b6" (UID: "45a74888-1276-4ef7-95f7-939c1df326b6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.895950 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a74888-1276-4ef7-95f7-939c1df326b6-kube-api-access-qlhpc" (OuterVolumeSpecName: "kube-api-access-qlhpc") pod "45a74888-1276-4ef7-95f7-939c1df326b6" (UID: "45a74888-1276-4ef7-95f7-939c1df326b6"). InnerVolumeSpecName "kube-api-access-qlhpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.896882 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90792618-1456-45fe-9249-d31ad3b1a682-kube-api-access-jpk6l" (OuterVolumeSpecName: "kube-api-access-jpk6l") pod "90792618-1456-45fe-9249-d31ad3b1a682" (UID: "90792618-1456-45fe-9249-d31ad3b1a682"). InnerVolumeSpecName "kube-api-access-jpk6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.899319 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpk6l\" (UniqueName: \"kubernetes.io/projected/90792618-1456-45fe-9249-d31ad3b1a682-kube-api-access-jpk6l\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.899346 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45a74888-1276-4ef7-95f7-939c1df326b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.899355 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90792618-1456-45fe-9249-d31ad3b1a682-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.899363 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlhpc\" (UniqueName: \"kubernetes.io/projected/45a74888-1276-4ef7-95f7-939c1df326b6-kube-api-access-qlhpc\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.899372 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3a5b314-aabd-4f2e-a9a4-fb2509b9697b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.914391 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a5b314-aabd-4f2e-a9a4-fb2509b9697b-kube-api-access-g6spr" (OuterVolumeSpecName: "kube-api-access-g6spr") pod "f3a5b314-aabd-4f2e-a9a4-fb2509b9697b" (UID: "f3a5b314-aabd-4f2e-a9a4-fb2509b9697b"). InnerVolumeSpecName "kube-api-access-g6spr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.939275 5049 generic.go:334] "Generic (PLEG): container finished" podID="95574d5f-6872-4ff3-a7a4-44a960bb46f0" containerID="2396b9674bf7c0eb9526c0c351d8d2c08f432f905d450d6c35283d1d84ab9751" exitCode=0 Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.939371 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"95574d5f-6872-4ff3-a7a4-44a960bb46f0","Type":"ContainerDied","Data":"2396b9674bf7c0eb9526c0c351d8d2c08f432f905d450d6c35283d1d84ab9751"} Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.952583 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.953210 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9f58f99c-tq7mf" event={"ID":"e55f335e-88f4-4e41-a177-0771cfd532c4","Type":"ContainerStarted","Data":"36c762f6d6cd4c0f7945ad3864602bd953eb4a082ccd22b801f8f15e74b65c75"} Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.953301 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-c9f58f99c-tq7mf" podUID="e55f335e-88f4-4e41-a177-0771cfd532c4" containerName="barbican-worker-log" containerID="cri-o://9a3023e42b1b3429592486ac2e6b5978a9de9fd3b130d6fd7a2ccd3cd217fb81" gracePeriod=30 Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.953467 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-c9f58f99c-tq7mf" podUID="e55f335e-88f4-4e41-a177-0771cfd532c4" containerName="barbican-worker" containerID="cri-o://36c762f6d6cd4c0f7945ad3864602bd953eb4a082ccd22b801f8f15e74b65c75" gracePeriod=30 Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.993457 5049 generic.go:334] "Generic (PLEG): container finished" podID="ee012087-89b0-49aa-bac7-4cd715e80294" containerID="b25b75cad7936ba61f756aac47c7d67fbdfb0f4ef169358438ea19123754ec57" exitCode=0 Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.993961 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ee012087-89b0-49aa-bac7-4cd715e80294","Type":"ContainerDied","Data":"b25b75cad7936ba61f756aac47c7d67fbdfb0f4ef169358438ea19123754ec57"} Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.994023 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ee012087-89b0-49aa-bac7-4cd715e80294","Type":"ContainerDied","Data":"72bb98d846493bdc1f2d04de947b2818340ec8bb1cd1bbb18c71b1042ab20bb3"} Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.994115 5049 scope.go:117] "RemoveContainer" containerID="b25b75cad7936ba61f756aac47c7d67fbdfb0f4ef169358438ea19123754ec57" Jan 27 17:21:21 crc kubenswrapper[5049]: I0127 17:21:21.993808 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.000441 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-nova-novncproxy-tls-certs\") pod \"ee012087-89b0-49aa-bac7-4cd715e80294\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.000612 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6f79ef0-54e0-45cf-a60d-2f27be25b1f6-operator-scripts\") pod \"a6f79ef0-54e0-45cf-a60d-2f27be25b1f6\" (UID: \"a6f79ef0-54e0-45cf-a60d-2f27be25b1f6\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.000822 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-combined-ca-bundle\") pod \"ee012087-89b0-49aa-bac7-4cd715e80294\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.000854 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-config-data\") pod \"ee012087-89b0-49aa-bac7-4cd715e80294\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.000908 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-vencrypt-tls-certs\") pod \"ee012087-89b0-49aa-bac7-4cd715e80294\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.000928 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnrpd\" (UniqueName: \"kubernetes.io/projected/ee012087-89b0-49aa-bac7-4cd715e80294-kube-api-access-gnrpd\") pod \"ee012087-89b0-49aa-bac7-4cd715e80294\" (UID: \"ee012087-89b0-49aa-bac7-4cd715e80294\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.001013 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6wwc\" (UniqueName: \"kubernetes.io/projected/a6f79ef0-54e0-45cf-a60d-2f27be25b1f6-kube-api-access-g6wwc\") pod \"a6f79ef0-54e0-45cf-a60d-2f27be25b1f6\" (UID: \"a6f79ef0-54e0-45cf-a60d-2f27be25b1f6\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.001712 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6spr\" (UniqueName: \"kubernetes.io/projected/f3a5b314-aabd-4f2e-a9a4-fb2509b9697b-kube-api-access-g6spr\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.007586 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f79ef0-54e0-45cf-a60d-2f27be25b1f6-kube-api-access-g6wwc" (OuterVolumeSpecName: "kube-api-access-g6wwc") pod "a6f79ef0-54e0-45cf-a60d-2f27be25b1f6" (UID: "a6f79ef0-54e0-45cf-a60d-2f27be25b1f6"). InnerVolumeSpecName "kube-api-access-g6wwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.008538 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6f79ef0-54e0-45cf-a60d-2f27be25b1f6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6f79ef0-54e0-45cf-a60d-2f27be25b1f6" (UID: "a6f79ef0-54e0-45cf-a60d-2f27be25b1f6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.015772 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-37a8-account-create-update-zmbd7" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.015992 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-37a8-account-create-update-zmbd7" event={"ID":"a6f79ef0-54e0-45cf-a60d-2f27be25b1f6","Type":"ContainerDied","Data":"45fa7081305c265dfe1f88b481c25b423d3bbcfc243ef3d23b2bb32385684eac"} Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.020254 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-80af-account-create-update-f778m" event={"ID":"81cf45aa-76f9-41d4-9385-7796174601b0","Type":"ContainerStarted","Data":"31f18906cc9adfcc26547661c15f4f8af8f35360ab4ccf0e319bf648195ec43d"} Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.023535 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="62ffcfe9-3e93-48ee-8d03-9b653d1bfede" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.028860 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee012087-89b0-49aa-bac7-4cd715e80294-kube-api-access-gnrpd" (OuterVolumeSpecName: "kube-api-access-gnrpd") pod "ee012087-89b0-49aa-bac7-4cd715e80294" (UID: "ee012087-89b0-49aa-bac7-4cd715e80294"). InnerVolumeSpecName "kube-api-access-gnrpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.030128 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d77-account-create-update-wxtnc" event={"ID":"e35d2b53-2aed-4405-b61e-abe411cb3b42","Type":"ContainerStarted","Data":"2fbfa159e47a66c86d9b956aad55bfc58431a011094167eecda98b90a58bf451"} Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.036992 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-973c-account-create-update-bz54b" event={"ID":"f3a5b314-aabd-4f2e-a9a4-fb2509b9697b","Type":"ContainerDied","Data":"41c7b2fbdd696dec27401df06774f2a479c03c83553db4a9794242a410dd0ea9"} Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.037064 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-973c-account-create-update-bz54b" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.048611 5049 scope.go:117] "RemoveContainer" containerID="b25b75cad7936ba61f756aac47c7d67fbdfb0f4ef169358438ea19123754ec57" Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.053481 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b25b75cad7936ba61f756aac47c7d67fbdfb0f4ef169358438ea19123754ec57\": container with ID starting with b25b75cad7936ba61f756aac47c7d67fbdfb0f4ef169358438ea19123754ec57 not found: ID does not exist" containerID="b25b75cad7936ba61f756aac47c7d67fbdfb0f4ef169358438ea19123754ec57" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.053544 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b25b75cad7936ba61f756aac47c7d67fbdfb0f4ef169358438ea19123754ec57"} err="failed to get container status \"b25b75cad7936ba61f756aac47c7d67fbdfb0f4ef169358438ea19123754ec57\": rpc error: code = NotFound desc = could not find container \"b25b75cad7936ba61f756aac47c7d67fbdfb0f4ef169358438ea19123754ec57\": container with ID starting with b25b75cad7936ba61f756aac47c7d67fbdfb0f4ef169358438ea19123754ec57 not found: ID does not exist" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.055405 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d1fd-account-create-update-nvs8x" event={"ID":"90792618-1456-45fe-9249-d31ad3b1a682","Type":"ContainerDied","Data":"a5c4f395da1b362ed6fa280cae2a84adfa6a174e6110b8cbaf8b77fb8888da9c"} Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.055487 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d1fd-account-create-update-nvs8x" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.068383 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-config-data" (OuterVolumeSpecName: "config-data") pod "ee012087-89b0-49aa-bac7-4cd715e80294" (UID: "ee012087-89b0-49aa-bac7-4cd715e80294"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
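
The RemoveContainer / "ContainerStatus from runtime service failed" pair above is a benign deletion race: the kubelet asks CRI-O to remove container b25b75... and then re-queries its status, but the container is already gone, so the lookup returns NotFound and "DeleteContainer returned error" records that NotFound rather than a real runtime fault. To confirm from the node, a sketch (crictl ships on OpenShift hosts; treat the exact invocation as illustrative):

    # e.g. inside "oc debug node/crc" after "chroot /host":
    # list all containers, filtered to the ID taken from the log above.
    crictl ps -a --id b25b75cad7936ba61f756aac47c7d67fbdfb0f4ef169358438ea19123754ec57
    # Empty output means the container is already removed, matching the
    # NotFound in the log rather than indicating a CRI-O problem.
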
Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.078196 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85447fcffb-gb5mq" event={"ID":"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52","Type":"ContainerStarted","Data":"778a554e4a163939392827258f2231fc8efa1afba442cb645dcf8ea6914e87df"} Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.078245 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85447fcffb-gb5mq" event={"ID":"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52","Type":"ContainerStarted","Data":"31532940e27611eb351095fe8d11a748df2424f7203a4ece2b697b34fe6f40f7"} Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.078375 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-85447fcffb-gb5mq" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api-log" containerID="cri-o://31532940e27611eb351095fe8d11a748df2424f7203a4ece2b697b34fe6f40f7" gracePeriod=30 Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.079380 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.079410 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.079443 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-85447fcffb-gb5mq" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api" containerID="cri-o://778a554e4a163939392827258f2231fc8efa1afba442cb645dcf8ea6914e87df" gracePeriod=30 Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.082690 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-c9f58f99c-tq7mf" podStartSLOduration=5.082655185 podStartE2EDuration="5.082655185s" podCreationTimestamp="2026-01-27 17:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:21:22.009412302 +0000 UTC m=+1457.108385851" watchObservedRunningTime="2026-01-27 17:21:22.082655185 +0000 UTC m=+1457.181628734" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.105467 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-config-data-default\") pod \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.105544 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fslt2\" (UniqueName: \"kubernetes.io/projected/95574d5f-6872-4ff3-a7a4-44a960bb46f0-kube-api-access-fslt2\") pod \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.105597 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.105660 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-operator-scripts\") pod \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.105697 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/95574d5f-6872-4ff3-a7a4-44a960bb46f0-config-data-generated\") pod \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.105731 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95574d5f-6872-4ff3-a7a4-44a960bb46f0-combined-ca-bundle\") pod \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.105787 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/95574d5f-6872-4ff3-a7a4-44a960bb46f0-galera-tls-certs\") pod \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.105818 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-kolla-config\") pod \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\" (UID: \"95574d5f-6872-4ff3-a7a4-44a960bb46f0\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.106267 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6f79ef0-54e0-45cf-a60d-2f27be25b1f6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.106278 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.106286 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnrpd\" (UniqueName: \"kubernetes.io/projected/ee012087-89b0-49aa-bac7-4cd715e80294-kube-api-access-gnrpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.106297 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6wwc\" (UniqueName: \"kubernetes.io/projected/a6f79ef0-54e0-45cf-a60d-2f27be25b1f6-kube-api-access-g6wwc\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.106357 5049 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.106401 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data podName:dbb24b4b-dfbd-431f-8244-098c40f7c24f nodeName:}" failed. No retries permitted until 2026-01-27 17:21:26.106387427 +0000 UTC m=+1461.205360966 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data") pod "rabbitmq-cell1-server-0" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f") : configmap "rabbitmq-cell1-config-data" not found Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.107271 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95574d5f-6872-4ff3-a7a4-44a960bb46f0-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "95574d5f-6872-4ff3-a7a4-44a960bb46f0" (UID: "95574d5f-6872-4ff3-a7a4-44a960bb46f0"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.108382 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "95574d5f-6872-4ff3-a7a4-44a960bb46f0" (UID: "95574d5f-6872-4ff3-a7a4-44a960bb46f0"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.111094 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95574d5f-6872-4ff3-a7a4-44a960bb46f0-kube-api-access-fslt2" (OuterVolumeSpecName: "kube-api-access-fslt2") pod "95574d5f-6872-4ff3-a7a4-44a960bb46f0" (UID: "95574d5f-6872-4ff3-a7a4-44a960bb46f0"). InnerVolumeSpecName "kube-api-access-fslt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.111253 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "95574d5f-6872-4ff3-a7a4-44a960bb46f0" (UID: "95574d5f-6872-4ff3-a7a4-44a960bb46f0"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.111444 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95574d5f-6872-4ff3-a7a4-44a960bb46f0" (UID: "95574d5f-6872-4ff3-a7a4-44a960bb46f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.114489 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee012087-89b0-49aa-bac7-4cd715e80294" (UID: "ee012087-89b0-49aa-bac7-4cd715e80294"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.128986 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8ec5-account-create-update-dwp4c" event={"ID":"45a74888-1276-4ef7-95f7-939c1df326b6","Type":"ContainerDied","Data":"9534c013662d1c90bf57e6988ed9a92418dc63031835a8a790a2bc1f788d969a"} Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.129093 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8ec5-account-create-update-dwp4c" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.154348 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d95d-account-create-update-5h486" event={"ID":"98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282","Type":"ContainerStarted","Data":"bfd5394d07da4ec19d5f5c766928b6146e884f7376bd2374174e5795dacba669"} Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.183720 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-csmlt"] Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.184105 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44f2f88-5083-4314-ac57-54597bca9efa" containerName="init" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184120 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44f2f88-5083-4314-ac57-54597bca9efa" containerName="init" Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.184134 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9fff683-8d1a-4a8c-b45f-8846c09a6f51" containerName="ovsdbserver-sb" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184140 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9fff683-8d1a-4a8c-b45f-8846c09a6f51" containerName="ovsdbserver-sb" Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.184153 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9fff683-8d1a-4a8c-b45f-8846c09a6f51" containerName="openstack-network-exporter" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184159 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9fff683-8d1a-4a8c-b45f-8846c09a6f51" containerName="openstack-network-exporter" Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.184168 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f44f2f88-5083-4314-ac57-54597bca9efa" containerName="dnsmasq-dns" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184176 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f44f2f88-5083-4314-ac57-54597bca9efa" containerName="dnsmasq-dns" Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.184190 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee012087-89b0-49aa-bac7-4cd715e80294" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184196 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee012087-89b0-49aa-bac7-4cd715e80294" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.184204 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9b6856-04b3-4630-b200-d99636bdb2fb" containerName="openstack-network-exporter" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184209 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9b6856-04b3-4630-b200-d99636bdb2fb" containerName="openstack-network-exporter" Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.184225 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" containerName="openstack-network-exporter" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184231 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" containerName="openstack-network-exporter" Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.184247 5049 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="95574d5f-6872-4ff3-a7a4-44a960bb46f0" containerName="galera" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184252 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="95574d5f-6872-4ff3-a7a4-44a960bb46f0" containerName="galera" Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.184269 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" containerName="ovsdbserver-nb" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184275 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" containerName="ovsdbserver-nb" Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.184284 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95574d5f-6872-4ff3-a7a4-44a960bb46f0" containerName="mysql-bootstrap" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184290 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="95574d5f-6872-4ff3-a7a4-44a960bb46f0" containerName="mysql-bootstrap" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184454 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee012087-89b0-49aa-bac7-4cd715e80294" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184466 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="95574d5f-6872-4ff3-a7a4-44a960bb46f0" containerName="galera" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184479 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9fff683-8d1a-4a8c-b45f-8846c09a6f51" containerName="openstack-network-exporter" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184488 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" containerName="openstack-network-exporter" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184496 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9fff683-8d1a-4a8c-b45f-8846c09a6f51" containerName="ovsdbserver-sb" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184505 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f44f2f88-5083-4314-ac57-54597bca9efa" containerName="dnsmasq-dns" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184515 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9b6856-04b3-4630-b200-d99636bdb2fb" containerName="openstack-network-exporter" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.184526 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" containerName="ovsdbserver-nb" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.185095 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-csmlt" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.185454 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.185855 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.185877 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-544e-account-create-update-6qhf4" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.185963 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" event={"ID":"7b36a6d6-32ec-4c02-b274-319cb860222c","Type":"ContainerStarted","Data":"5212effe4a5f9b45d0cde4b2a3588776508dc67c73e64ac7ea425d6da44261e9"} Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.186178 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" podUID="7b36a6d6-32ec-4c02-b274-319cb860222c" containerName="barbican-keystone-listener-log" containerID="cri-o://7560c386262e552991ab259cf0a76b6d1070d688c0810ffc7b01a4e88c45247b" gracePeriod=30 Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.186379 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" podUID="7b36a6d6-32ec-4c02-b274-319cb860222c" containerName="barbican-keystone-listener" containerID="cri-o://5212effe4a5f9b45d0cde4b2a3588776508dc67c73e64ac7ea425d6da44261e9" gracePeriod=30 Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.194870 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "ee012087-89b0-49aa-bac7-4cd715e80294" (UID: "ee012087-89b0-49aa-bac7-4cd715e80294"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.204265 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.210134 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fslt2\" (UniqueName: \"kubernetes.io/projected/95574d5f-6872-4ff3-a7a4-44a960bb46f0-kube-api-access-fslt2\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.210162 5049 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.210171 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.210179 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/95574d5f-6872-4ff3-a7a4-44a960bb46f0-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.210189 5049 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.210199 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/95574d5f-6872-4ff3-a7a4-44a960bb46f0-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.210207 5049 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.231358 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-973c-account-create-update-bz54b"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.259424 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-973c-account-create-update-bz54b"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.278885 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "mysql-db") pod "95574d5f-6872-4ff3-a7a4-44a960bb46f0" (UID: "95574d5f-6872-4ff3-a7a4-44a960bb46f0"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.286034 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-csmlt"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.294426 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95574d5f-6872-4ff3-a7a4-44a960bb46f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95574d5f-6872-4ff3-a7a4-44a960bb46f0" (UID: "95574d5f-6872-4ff3-a7a4-44a960bb46f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.309765 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d1fd-account-create-update-nvs8x"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.319869 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d1fd-account-create-update-nvs8x"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.326843 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e50a09-5c4d-4898-bdd7-16d85fa7c90d-operator-scripts\") pod \"root-account-create-update-csmlt\" (UID: \"41e50a09-5c4d-4898-bdd7-16d85fa7c90d\") " pod="openstack/root-account-create-update-csmlt" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.327043 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krzx7\" (UniqueName: \"kubernetes.io/projected/41e50a09-5c4d-4898-bdd7-16d85fa7c90d-kube-api-access-krzx7\") pod \"root-account-create-update-csmlt\" (UID: \"41e50a09-5c4d-4898-bdd7-16d85fa7c90d\") " pod="openstack/root-account-create-update-csmlt" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.327100 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95574d5f-6872-4ff3-a7a4-44a960bb46f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.327122 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.329209 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-nova-novncproxy-tls-certs" 
(OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "ee012087-89b0-49aa-bac7-4cd715e80294" (UID: "ee012087-89b0-49aa-bac7-4cd715e80294"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.348751 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-37a8-account-create-update-zmbd7"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.362985 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-85447fcffb-gb5mq" podStartSLOduration=5.362968037 podStartE2EDuration="5.362968037s" podCreationTimestamp="2026-01-27 17:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:21:22.237459912 +0000 UTC m=+1457.336433471" watchObservedRunningTime="2026-01-27 17:21:22.362968037 +0000 UTC m=+1457.461941596" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.365013 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-37a8-account-create-update-zmbd7"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.414969 5049 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.422566 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" podStartSLOduration=5.422546869 podStartE2EDuration="5.422546869s" podCreationTimestamp="2026-01-27 17:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:21:22.29412775 +0000 UTC m=+1457.393101299" watchObservedRunningTime="2026-01-27 17:21:22.422546869 +0000 UTC m=+1457.521520408" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.430594 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7472ee77-bcfd-4e60-a7e4-359076bc334a-operator-scripts\") pod \"nova-cell1-544e-account-create-update-6qhf4\" (UID: \"7472ee77-bcfd-4e60-a7e4-359076bc334a\") " pod="openstack/nova-cell1-544e-account-create-update-6qhf4" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.430636 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krzx7\" (UniqueName: \"kubernetes.io/projected/41e50a09-5c4d-4898-bdd7-16d85fa7c90d-kube-api-access-krzx7\") pod \"root-account-create-update-csmlt\" (UID: \"41e50a09-5c4d-4898-bdd7-16d85fa7c90d\") " pod="openstack/root-account-create-update-csmlt" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.430670 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e50a09-5c4d-4898-bdd7-16d85fa7c90d-operator-scripts\") pod \"root-account-create-update-csmlt\" (UID: \"41e50a09-5c4d-4898-bdd7-16d85fa7c90d\") " pod="openstack/root-account-create-update-csmlt" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.430749 5049 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.430760 5049 reconciler_common.go:293] 
"Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee012087-89b0-49aa-bac7-4cd715e80294-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.430817 5049 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.430904 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7472ee77-bcfd-4e60-a7e4-359076bc334a-operator-scripts podName:7472ee77-bcfd-4e60-a7e4-359076bc334a nodeName:}" failed. No retries permitted until 2026-01-27 17:21:26.430887428 +0000 UTC m=+1461.529860977 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7472ee77-bcfd-4e60-a7e4-359076bc334a-operator-scripts") pod "nova-cell1-544e-account-create-update-6qhf4" (UID: "7472ee77-bcfd-4e60-a7e4-359076bc334a") : configmap "openstack-cell1-scripts" not found Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.431698 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e50a09-5c4d-4898-bdd7-16d85fa7c90d-operator-scripts\") pod \"root-account-create-update-csmlt\" (UID: \"41e50a09-5c4d-4898-bdd7-16d85fa7c90d\") " pod="openstack/root-account-create-update-csmlt" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.468386 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krzx7\" (UniqueName: \"kubernetes.io/projected/41e50a09-5c4d-4898-bdd7-16d85fa7c90d-kube-api-access-krzx7\") pod \"root-account-create-update-csmlt\" (UID: \"41e50a09-5c4d-4898-bdd7-16d85fa7c90d\") " pod="openstack/root-account-create-update-csmlt" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.469897 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95574d5f-6872-4ff3-a7a4-44a960bb46f0-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "95574d5f-6872-4ff3-a7a4-44a960bb46f0" (UID: "95574d5f-6872-4ff3-a7a4-44a960bb46f0"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.470540 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-544e-account-create-update-6qhf4"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.483529 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-544e-account-create-update-6qhf4"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.533140 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5chqz\" (UniqueName: \"kubernetes.io/projected/7472ee77-bcfd-4e60-a7e4-359076bc334a-kube-api-access-5chqz\") pod \"nova-cell1-544e-account-create-update-6qhf4\" (UID: \"7472ee77-bcfd-4e60-a7e4-359076bc334a\") " pod="openstack/nova-cell1-544e-account-create-update-6qhf4" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.534601 5049 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/95574d5f-6872-4ff3-a7a4-44a960bb46f0-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.536274 5049 projected.go:194] Error preparing data for projected volume kube-api-access-5chqz for pod openstack/nova-cell1-544e-account-create-update-6qhf4: failed to fetch token: pod "nova-cell1-544e-account-create-update-6qhf4" not found Jan 27 17:21:22 crc kubenswrapper[5049]: E0127 17:21:22.536335 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7472ee77-bcfd-4e60-a7e4-359076bc334a-kube-api-access-5chqz podName:7472ee77-bcfd-4e60-a7e4-359076bc334a nodeName:}" failed. No retries permitted until 2026-01-27 17:21:26.536317237 +0000 UTC m=+1461.635290786 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5chqz" (UniqueName: "kubernetes.io/projected/7472ee77-bcfd-4e60-a7e4-359076bc334a-kube-api-access-5chqz") pod "nova-cell1-544e-account-create-update-6qhf4" (UID: "7472ee77-bcfd-4e60-a7e4-359076bc334a") : failed to fetch token: pod "nova-cell1-544e-account-create-update-6qhf4" not found Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.561852 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8ec5-account-create-update-dwp4c"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.569250 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8ec5-account-create-update-dwp4c"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.581107 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.581400 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.637433 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7472ee77-bcfd-4e60-a7e4-359076bc334a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.637458 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5chqz\" (UniqueName: \"kubernetes.io/projected/7472ee77-bcfd-4e60-a7e4-359076bc334a-kube-api-access-5chqz\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.721592 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-csmlt" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.758832 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-80af-account-create-update-f778m" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.858707 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cb5b\" (UniqueName: \"kubernetes.io/projected/81cf45aa-76f9-41d4-9385-7796174601b0-kube-api-access-8cb5b\") pod \"81cf45aa-76f9-41d4-9385-7796174601b0\" (UID: \"81cf45aa-76f9-41d4-9385-7796174601b0\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.859208 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81cf45aa-76f9-41d4-9385-7796174601b0-operator-scripts\") pod \"81cf45aa-76f9-41d4-9385-7796174601b0\" (UID: \"81cf45aa-76f9-41d4-9385-7796174601b0\") " Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.860296 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81cf45aa-76f9-41d4-9385-7796174601b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81cf45aa-76f9-41d4-9385-7796174601b0" (UID: "81cf45aa-76f9-41d4-9385-7796174601b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.860522 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81cf45aa-76f9-41d4-9385-7796174601b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.854782 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.866510 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.869830 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81cf45aa-76f9-41d4-9385-7796174601b0-kube-api-access-8cb5b" (OuterVolumeSpecName: "kube-api-access-8cb5b") pod "81cf45aa-76f9-41d4-9385-7796174601b0" (UID: "81cf45aa-76f9-41d4-9385-7796174601b0"). InnerVolumeSpecName "kube-api-access-8cb5b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.941275 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.942018 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="proxy-httpd" containerID="cri-o://699c3c69ddd586c2ba6efa14873e1ef576bd164da6c7b87cfc3147d472bbdbd9" gracePeriod=30 Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.942075 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="sg-core" containerID="cri-o://4a1f580388f9867d0121ec63028c1c96323cf21dcefe575dbced96cf285661e4" gracePeriod=30 Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.942250 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="ceilometer-notification-agent" containerID="cri-o://89a7551a874ceab6f02852e0381c82d453fda16064fce545e571eeeac5b7ce71" gracePeriod=30 Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.942303 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="ceilometer-central-agent" containerID="cri-o://0a77e3ad3f76bbeeb3b5ada4b0ddb6ff8c950ed1581373853362db15eb5b6c69" gracePeriod=30 Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.942355 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="492cb82e-33fb-4fc7-85e2-7d4285e5ff00" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.166:8776/healthcheck\": dial tcp 10.217.0.166:8776: connect: connection refused" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.964651 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cb5b\" (UniqueName: \"kubernetes.io/projected/81cf45aa-76f9-41d4-9385-7796174601b0-kube-api-access-8cb5b\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.971728 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d95d-account-create-update-5h486" Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.980532 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 17:21:22 crc kubenswrapper[5049]: I0127 17:21:22.980759 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b915091f-1f89-4602-8b1f-2214883644e0" containerName="kube-state-metrics" containerID="cri-o://ff5438e2bba7d976fe7a35950c7d8f3e8815181c6b08e323c26c90c5eef3ef12" gracePeriod=30 Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.009509 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d77-account-create-update-wxtnc" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.042883 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9swgr" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.066057 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282-operator-scripts\") pod \"98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282\" (UID: \"98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.066385 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e35d2b53-2aed-4405-b61e-abe411cb3b42-operator-scripts\") pod \"e35d2b53-2aed-4405-b61e-abe411cb3b42\" (UID: \"e35d2b53-2aed-4405-b61e-abe411cb3b42\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.066481 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2cc976d-73bd-4d16-a1f6-84108954384f-operator-scripts\") pod \"f2cc976d-73bd-4d16-a1f6-84108954384f\" (UID: \"f2cc976d-73bd-4d16-a1f6-84108954384f\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.066553 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwbrk\" (UniqueName: \"kubernetes.io/projected/98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282-kube-api-access-cwbrk\") pod \"98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282\" (UID: \"98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.066577 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s46tr\" (UniqueName: \"kubernetes.io/projected/f2cc976d-73bd-4d16-a1f6-84108954384f-kube-api-access-s46tr\") pod \"f2cc976d-73bd-4d16-a1f6-84108954384f\" (UID: \"f2cc976d-73bd-4d16-a1f6-84108954384f\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.066667 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tsf7\" (UniqueName: \"kubernetes.io/projected/e35d2b53-2aed-4405-b61e-abe411cb3b42-kube-api-access-2tsf7\") pod \"e35d2b53-2aed-4405-b61e-abe411cb3b42\" (UID: \"e35d2b53-2aed-4405-b61e-abe411cb3b42\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.066724 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282" (UID: "98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.067116 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2cc976d-73bd-4d16-a1f6-84108954384f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2cc976d-73bd-4d16-a1f6-84108954384f" (UID: "f2cc976d-73bd-4d16-a1f6-84108954384f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.067345 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.067358 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2cc976d-73bd-4d16-a1f6-84108954384f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.067853 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e35d2b53-2aed-4405-b61e-abe411cb3b42-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e35d2b53-2aed-4405-b61e-abe411cb3b42" (UID: "e35d2b53-2aed-4405-b61e-abe411cb3b42"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.074164 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e35d2b53-2aed-4405-b61e-abe411cb3b42-kube-api-access-2tsf7" (OuterVolumeSpecName: "kube-api-access-2tsf7") pod "e35d2b53-2aed-4405-b61e-abe411cb3b42" (UID: "e35d2b53-2aed-4405-b61e-abe411cb3b42"). InnerVolumeSpecName "kube-api-access-2tsf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.074224 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2cc976d-73bd-4d16-a1f6-84108954384f-kube-api-access-s46tr" (OuterVolumeSpecName: "kube-api-access-s46tr") pod "f2cc976d-73bd-4d16-a1f6-84108954384f" (UID: "f2cc976d-73bd-4d16-a1f6-84108954384f"). InnerVolumeSpecName "kube-api-access-s46tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.078595 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282-kube-api-access-cwbrk" (OuterVolumeSpecName: "kube-api-access-cwbrk") pod "98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282" (UID: "98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282"). InnerVolumeSpecName "kube-api-access-cwbrk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.132655 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:56864->10.217.0.203:8775: read: connection reset by peer" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.133120 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": read tcp 10.217.0.2:56862->10.217.0.203:8775: read: connection reset by peer" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.168247 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e35d2b53-2aed-4405-b61e-abe411cb3b42-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.168288 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwbrk\" (UniqueName: \"kubernetes.io/projected/98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282-kube-api-access-cwbrk\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.168302 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s46tr\" (UniqueName: \"kubernetes.io/projected/f2cc976d-73bd-4d16-a1f6-84108954384f-kube-api-access-s46tr\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.168314 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tsf7\" (UniqueName: \"kubernetes.io/projected/e35d2b53-2aed-4405-b61e-abe411cb3b42-kube-api-access-2tsf7\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.240683 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.241035 5049 generic.go:334] "Generic (PLEG): container finished" podID="85620b2d-c74a-4c51-8129-c747016dc357" containerID="2d46166a5884cd1f3b001c9ab4d6aa5473e16cee138019fd911e628d4749058b" exitCode=0 Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.241081 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c4bd57ddb-fz2dp" event={"ID":"85620b2d-c74a-4c51-8129-c747016dc357","Type":"ContainerDied","Data":"2d46166a5884cd1f3b001c9ab4d6aa5473e16cee138019fd911e628d4749058b"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.241107 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c4bd57ddb-fz2dp" event={"ID":"85620b2d-c74a-4c51-8129-c747016dc357","Type":"ContainerDied","Data":"13733998ed833daa8145197efcef30ead77e37441e6640da09acba5580372e51"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.241122 5049 scope.go:117] "RemoveContainer" containerID="2d46166a5884cd1f3b001c9ab4d6aa5473e16cee138019fd911e628d4749058b" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.277537 5049 generic.go:334] "Generic (PLEG): container finished" podID="59384c20-c0a3-4524-9ddb-407b96e8f882" containerID="d944265d72bf3afb4b6b73f0c7c83289738cd7f4ed8517272ac5b673ffa17c8f" exitCode=0 Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.277599 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59384c20-c0a3-4524-9ddb-407b96e8f882","Type":"ContainerDied","Data":"d944265d72bf3afb4b6b73f0c7c83289738cd7f4ed8517272ac5b673ffa17c8f"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.297348 5049 generic.go:334] "Generic (PLEG): container finished" podID="d89c9402-b4c3-4180-8a61-9e63497ebb66" containerID="5054a75f289d476269fdd4cec1f526a79442c1e4b02d766eec63bd20c154c9f8" exitCode=0 Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.297471 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d89c9402-b4c3-4180-8a61-9e63497ebb66","Type":"ContainerDied","Data":"5054a75f289d476269fdd4cec1f526a79442c1e4b02d766eec63bd20c154c9f8"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.313791 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d77-account-create-update-wxtnc" event={"ID":"e35d2b53-2aed-4405-b61e-abe411cb3b42","Type":"ContainerDied","Data":"2fbfa159e47a66c86d9b956aad55bfc58431a011094167eecda98b90a58bf451"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.313866 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d77-account-create-update-wxtnc" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.372650 5049 generic.go:334] "Generic (PLEG): container finished" podID="b915091f-1f89-4602-8b1f-2214883644e0" containerID="ff5438e2bba7d976fe7a35950c7d8f3e8815181c6b08e323c26c90c5eef3ef12" exitCode=2 Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.372876 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b915091f-1f89-4602-8b1f-2214883644e0","Type":"ContainerDied","Data":"ff5438e2bba7d976fe7a35950c7d8f3e8815181c6b08e323c26c90c5eef3ef12"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.378775 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-combined-ca-bundle\") pod \"85620b2d-c74a-4c51-8129-c747016dc357\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.399879 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-694c9\" (UniqueName: \"kubernetes.io/projected/85620b2d-c74a-4c51-8129-c747016dc357-kube-api-access-694c9\") pod \"85620b2d-c74a-4c51-8129-c747016dc357\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.399931 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85620b2d-c74a-4c51-8129-c747016dc357-logs\") pod \"85620b2d-c74a-4c51-8129-c747016dc357\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.399965 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-public-tls-certs\") pod \"85620b2d-c74a-4c51-8129-c747016dc357\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.400012 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-scripts\") pod \"85620b2d-c74a-4c51-8129-c747016dc357\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.400047 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-internal-tls-certs\") pod \"85620b2d-c74a-4c51-8129-c747016dc357\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.400127 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-config-data\") pod \"85620b2d-c74a-4c51-8129-c747016dc357\" (UID: \"85620b2d-c74a-4c51-8129-c747016dc357\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.426954 5049 generic.go:334] "Generic (PLEG): container finished" podID="7b36a6d6-32ec-4c02-b274-319cb860222c" containerID="7560c386262e552991ab259cf0a76b6d1070d688c0810ffc7b01a4e88c45247b" exitCode=143 Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.427032 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" 
event={"ID":"7b36a6d6-32ec-4c02-b274-319cb860222c","Type":"ContainerDied","Data":"7560c386262e552991ab259cf0a76b6d1070d688c0810ffc7b01a4e88c45247b"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.442073 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85620b2d-c74a-4c51-8129-c747016dc357-kube-api-access-694c9" (OuterVolumeSpecName: "kube-api-access-694c9") pod "85620b2d-c74a-4c51-8129-c747016dc357" (UID: "85620b2d-c74a-4c51-8129-c747016dc357"). InnerVolumeSpecName "kube-api-access-694c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.444396 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85620b2d-c74a-4c51-8129-c747016dc357-logs" (OuterVolumeSpecName: "logs") pod "85620b2d-c74a-4c51-8129-c747016dc357" (UID: "85620b2d-c74a-4c51-8129-c747016dc357"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.491373 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-31eb-account-create-update-hr76v"] Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.497694 5049 scope.go:117] "RemoveContainer" containerID="11f21899d008114f9b81084b04d95a0c833f10d46074769ab7100499bc045dff" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.497851 5049 generic.go:334] "Generic (PLEG): container finished" podID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerID="31532940e27611eb351095fe8d11a748df2424f7203a4ece2b697b34fe6f40f7" exitCode=143 Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.497891 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85447fcffb-gb5mq" event={"ID":"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52","Type":"ContainerDied","Data":"31532940e27611eb351095fe8d11a748df2424f7203a4ece2b697b34fe6f40f7"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.505078 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-694c9\" (UniqueName: \"kubernetes.io/projected/85620b2d-c74a-4c51-8129-c747016dc357-kube-api-access-694c9\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.505123 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85620b2d-c74a-4c51-8129-c747016dc357-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.516351 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-31eb-account-create-update-hr76v"] Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.522469 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-scripts" (OuterVolumeSpecName: "scripts") pod "85620b2d-c74a-4c51-8129-c747016dc357" (UID: "85620b2d-c74a-4c51-8129-c747016dc357"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.541350 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-87m2z"] Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.551165 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.559027 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xrn5m"] Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.559399 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"95574d5f-6872-4ff3-a7a4-44a960bb46f0","Type":"ContainerDied","Data":"e6ad8c1f1f2979229b70f694b11fb76b6455e02a363c81fb2d6c41a8797289fb"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.564740 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-87m2z"] Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.572013 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xrn5m"] Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.590087 5049 scope.go:117] "RemoveContainer" containerID="2d46166a5884cd1f3b001c9ab4d6aa5473e16cee138019fd911e628d4749058b" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.590341 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.590883 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-679b885964-9p8nj"] Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.591053 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-679b885964-9p8nj" podUID="c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" containerName="keystone-api" containerID="cri-o://064ca2d2f070edfd81bea729538cd5a51f364f47fe874ac76e3a571a97d5681c" gracePeriod=30 Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.595931 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 17:21:23 crc kubenswrapper[5049]: E0127 17:21:23.599659 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d46166a5884cd1f3b001c9ab4d6aa5473e16cee138019fd911e628d4749058b\": container with ID starting with 2d46166a5884cd1f3b001c9ab4d6aa5473e16cee138019fd911e628d4749058b not found: ID does not exist" containerID="2d46166a5884cd1f3b001c9ab4d6aa5473e16cee138019fd911e628d4749058b" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.599721 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d46166a5884cd1f3b001c9ab4d6aa5473e16cee138019fd911e628d4749058b"} err="failed to get container status \"2d46166a5884cd1f3b001c9ab4d6aa5473e16cee138019fd911e628d4749058b\": rpc error: code = NotFound desc = could not find container \"2d46166a5884cd1f3b001c9ab4d6aa5473e16cee138019fd911e628d4749058b\": container with ID starting with 2d46166a5884cd1f3b001c9ab4d6aa5473e16cee138019fd911e628d4749058b not found: ID does not exist" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.599746 5049 scope.go:117] "RemoveContainer" containerID="11f21899d008114f9b81084b04d95a0c833f10d46074769ab7100499bc045dff" Jan 27 17:21:23 crc kubenswrapper[5049]: E0127 17:21:23.614096 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11f21899d008114f9b81084b04d95a0c833f10d46074769ab7100499bc045dff\": container with ID starting with 11f21899d008114f9b81084b04d95a0c833f10d46074769ab7100499bc045dff not found: ID does not exist" 
containerID="11f21899d008114f9b81084b04d95a0c833f10d46074769ab7100499bc045dff" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.614135 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11f21899d008114f9b81084b04d95a0c833f10d46074769ab7100499bc045dff"} err="failed to get container status \"11f21899d008114f9b81084b04d95a0c833f10d46074769ab7100499bc045dff\": rpc error: code = NotFound desc = could not find container \"11f21899d008114f9b81084b04d95a0c833f10d46074769ab7100499bc045dff\": container with ID starting with 11f21899d008114f9b81084b04d95a0c833f10d46074769ab7100499bc045dff not found: ID does not exist" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.614159 5049 scope.go:117] "RemoveContainer" containerID="2396b9674bf7c0eb9526c0c351d8d2c08f432f905d450d6c35283d1d84ab9751" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.614382 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d77-account-create-update-wxtnc"] Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.618229 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdffl\" (UniqueName: \"kubernetes.io/projected/a923a49d-7e17-40a5-975a-9f4a39f92d51-kube-api-access-tdffl\") pod \"a923a49d-7e17-40a5-975a-9f4a39f92d51\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.618334 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a923a49d-7e17-40a5-975a-9f4a39f92d51-run-httpd\") pod \"a923a49d-7e17-40a5-975a-9f4a39f92d51\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.618370 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-public-tls-certs\") pod \"a923a49d-7e17-40a5-975a-9f4a39f92d51\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.618404 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a923a49d-7e17-40a5-975a-9f4a39f92d51-etc-swift\") pod \"a923a49d-7e17-40a5-975a-9f4a39f92d51\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.618429 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-combined-ca-bundle\") pod \"a923a49d-7e17-40a5-975a-9f4a39f92d51\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.618483 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-internal-tls-certs\") pod \"a923a49d-7e17-40a5-975a-9f4a39f92d51\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.618512 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a923a49d-7e17-40a5-975a-9f4a39f92d51-log-httpd\") pod \"a923a49d-7e17-40a5-975a-9f4a39f92d51\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 
17:21:23.618541 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-config-data\") pod \"a923a49d-7e17-40a5-975a-9f4a39f92d51\" (UID: \"a923a49d-7e17-40a5-975a-9f4a39f92d51\") " Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.618950 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.630904 5049 generic.go:334] "Generic (PLEG): container finished" podID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerID="4a1f580388f9867d0121ec63028c1c96323cf21dcefe575dbced96cf285661e4" exitCode=2 Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.631020 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6b92aaa-ae4b-41ba-bd72-5e6d01518000","Type":"ContainerDied","Data":"4a1f580388f9867d0121ec63028c1c96323cf21dcefe575dbced96cf285661e4"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.631295 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a923a49d-7e17-40a5-975a-9f4a39f92d51-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a923a49d-7e17-40a5-975a-9f4a39f92d51" (UID: "a923a49d-7e17-40a5-975a-9f4a39f92d51"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.631412 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a923a49d-7e17-40a5-975a-9f4a39f92d51-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a923a49d-7e17-40a5-975a-9f4a39f92d51" (UID: "a923a49d-7e17-40a5-975a-9f4a39f92d51"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.632060 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a923a49d-7e17-40a5-975a-9f4a39f92d51-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a923a49d-7e17-40a5-975a-9f4a39f92d51" (UID: "a923a49d-7e17-40a5-975a-9f4a39f92d51"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.637884 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-config-data" (OuterVolumeSpecName: "config-data") pod "85620b2d-c74a-4c51-8129-c747016dc357" (UID: "85620b2d-c74a-4c51-8129-c747016dc357"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.640144 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6d77-account-create-update-wxtnc"] Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.654411 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-g89zb"] Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.677714 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a923a49d-7e17-40a5-975a-9f4a39f92d51-kube-api-access-tdffl" (OuterVolumeSpecName: "kube-api-access-tdffl") pod "a923a49d-7e17-40a5-975a-9f4a39f92d51" (UID: "a923a49d-7e17-40a5-975a-9f4a39f92d51"). InnerVolumeSpecName "kube-api-access-tdffl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.682369 5049 generic.go:334] "Generic (PLEG): container finished" podID="492cb82e-33fb-4fc7-85e2-7d4285e5ff00" containerID="166079e95268cb7e35e7bb3173c1768e058053e781153b7e92d90749146e26bf" exitCode=0 Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.685731 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "85620b2d-c74a-4c51-8129-c747016dc357" (UID: "85620b2d-c74a-4c51-8129-c747016dc357"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.691871 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85620b2d-c74a-4c51-8129-c747016dc357" (UID: "85620b2d-c74a-4c51-8129-c747016dc357"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.706197 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="280bd899-5f8e-49a6-9ebc-32acff3c72e6" path="/var/lib/kubelet/pods/280bd899-5f8e-49a6-9ebc-32acff3c72e6/volumes" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.706994 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a74888-1276-4ef7-95f7-939c1df326b6" path="/var/lib/kubelet/pods/45a74888-1276-4ef7-95f7-939c1df326b6/volumes" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.707769 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f1e2e4-7888-40f5-962c-2298aaa75d60" path="/var/lib/kubelet/pods/56f1e2e4-7888-40f5-962c-2298aaa75d60/volumes" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.708196 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7472ee77-bcfd-4e60-a7e4-359076bc334a" path="/var/lib/kubelet/pods/7472ee77-bcfd-4e60-a7e4-359076bc334a/volumes" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.718868 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d" path="/var/lib/kubelet/pods/78947aa3-e8a0-4ec5-9c2c-1ffeb3e8e59d/volumes" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.720217 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90792618-1456-45fe-9249-d31ad3b1a682" path="/var/lib/kubelet/pods/90792618-1456-45fe-9249-d31ad3b1a682/volumes" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.720539 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdffl\" (UniqueName: \"kubernetes.io/projected/a923a49d-7e17-40a5-975a-9f4a39f92d51-kube-api-access-tdffl\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.720731 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f79ef0-54e0-45cf-a60d-2f27be25b1f6" path="/var/lib/kubelet/pods/a6f79ef0-54e0-45cf-a60d-2f27be25b1f6/volumes" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.721286 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4ec2c4a-f699-4b6e-b9f4-90f83647167e" path="/var/lib/kubelet/pods/b4ec2c4a-f699-4b6e-b9f4-90f83647167e/volumes" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.724198 5049 
generic.go:334] "Generic (PLEG): container finished" podID="e55f335e-88f4-4e41-a177-0771cfd532c4" containerID="9a3023e42b1b3429592486ac2e6b5978a9de9fd3b130d6fd7a2ccd3cd217fb81" exitCode=143 Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.724723 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.725047 5049 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a923a49d-7e17-40a5-975a-9f4a39f92d51-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.725073 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.725083 5049 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a923a49d-7e17-40a5-975a-9f4a39f92d51-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.725092 5049 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a923a49d-7e17-40a5-975a-9f4a39f92d51-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.725100 5049 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.725418 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e35d2b53-2aed-4405-b61e-abe411cb3b42" path="/var/lib/kubelet/pods/e35d2b53-2aed-4405-b61e-abe411cb3b42/volumes" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.726285 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee012087-89b0-49aa-bac7-4cd715e80294" path="/var/lib/kubelet/pods/ee012087-89b0-49aa-bac7-4cd715e80294/volumes" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.727146 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a5b314-aabd-4f2e-a9a4-fb2509b9697b" path="/var/lib/kubelet/pods/f3a5b314-aabd-4f2e-a9a4-fb2509b9697b/volumes" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.728015 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f44f2f88-5083-4314-ac57-54597bca9efa" path="/var/lib/kubelet/pods/f44f2f88-5083-4314-ac57-54597bca9efa/volumes" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.728198 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a923a49d-7e17-40a5-975a-9f4a39f92d51" (UID: "a923a49d-7e17-40a5-975a-9f4a39f92d51"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.732934 5049 generic.go:334] "Generic (PLEG): container finished" podID="a923a49d-7e17-40a5-975a-9f4a39f92d51" containerID="7726471af8c08f6c10586088666d4b505ea63dc887de8daf8cc743ea2173f215" exitCode=0 Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.732961 5049 generic.go:334] "Generic (PLEG): container finished" podID="a923a49d-7e17-40a5-975a-9f4a39f92d51" containerID="d21f3a14ae01f4df5aae43325cf7e59b6173d081cb638021f0c8cac28f33e789" exitCode=0 Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.733039 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.759012 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492cb82e-33fb-4fc7-85e2-7d4285e5ff00","Type":"ContainerDied","Data":"166079e95268cb7e35e7bb3173c1768e058053e781153b7e92d90749146e26bf"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.759047 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9f58f99c-tq7mf" event={"ID":"e55f335e-88f4-4e41-a177-0771cfd532c4","Type":"ContainerDied","Data":"9a3023e42b1b3429592486ac2e6b5978a9de9fd3b130d6fd7a2ccd3cd217fb81"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.759064 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-g89zb"] Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.759083 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-csmlt"] Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.759096 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" event={"ID":"a923a49d-7e17-40a5-975a-9f4a39f92d51","Type":"ContainerDied","Data":"7726471af8c08f6c10586088666d4b505ea63dc887de8daf8cc743ea2173f215"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.759108 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6dbbddc5bc-5k4jm" event={"ID":"a923a49d-7e17-40a5-975a-9f4a39f92d51","Type":"ContainerDied","Data":"d21f3a14ae01f4df5aae43325cf7e59b6173d081cb638021f0c8cac28f33e789"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.759120 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.763282 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9swgr" event={"ID":"f2cc976d-73bd-4d16-a1f6-84108954384f","Type":"ContainerDied","Data":"c5076f3639b356f272b0784bd0d19621145ec99cc34ac71def9b727b5234f059"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.763338 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9swgr" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.765778 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-80af-account-create-update-f778m" event={"ID":"81cf45aa-76f9-41d4-9385-7796174601b0","Type":"ContainerDied","Data":"31f18906cc9adfcc26547661c15f4f8af8f35360ab4ccf0e319bf648195ec43d"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.765861 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-80af-account-create-update-f778m" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.781509 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d95d-account-create-update-5h486" event={"ID":"98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282","Type":"ContainerDied","Data":"bfd5394d07da4ec19d5f5c766928b6146e884f7376bd2374174e5795dacba669"} Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.781588 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d95d-account-create-update-5h486" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.783502 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-config-data" (OuterVolumeSpecName: "config-data") pod "a923a49d-7e17-40a5-975a-9f4a39f92d51" (UID: "a923a49d-7e17-40a5-975a-9f4a39f92d51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.793396 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.798984 5049 scope.go:117] "RemoveContainer" containerID="7c0aed498f47898c511db9b6a1e7b505874797ec15911d29135932092dbc34ca" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.817782 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "85620b2d-c74a-4c51-8129-c747016dc357" (UID: "85620b2d-c74a-4c51-8129-c747016dc357"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.826622 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.826650 5049 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85620b2d-c74a-4c51-8129-c747016dc357-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.826660 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.832030 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a923a49d-7e17-40a5-975a-9f4a39f92d51" (UID: "a923a49d-7e17-40a5-975a-9f4a39f92d51"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.848315 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a923a49d-7e17-40a5-975a-9f4a39f92d51" (UID: "a923a49d-7e17-40a5-975a-9f4a39f92d51"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.858244 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="de39a65a-7265-4418-a94b-f8f8f30c3807" containerName="galera" containerID="cri-o://4c24478588bff8e90f1e2d67898dd6f439331bc9e8f594f828123fbc7e460d13" gracePeriod=30 Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.927733 5049 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:23 crc kubenswrapper[5049]: I0127 17:21:23.927766 5049 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a923a49d-7e17-40a5-975a-9f4a39f92d51-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.060372 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.087918 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.201:3000/\": dial tcp 10.217.0.201:3000: connect: connection refused" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.089594 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.123507 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.126193 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 17:21:24 crc kubenswrapper[5049]: E0127 17:21:24.135645 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362 is running failed: container process not found" containerID="c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 17:21:24 crc kubenswrapper[5049]: E0127 17:21:24.151980 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362 is running failed: container process not found" containerID="c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 17:21:24 crc kubenswrapper[5049]: E0127 17:21:24.156792 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362 is running failed: container process not found" containerID="c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 17:21:24 crc kubenswrapper[5049]: E0127 17:21:24.156849 5049 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c9edd1d0-64dc-4c83-9149-04c772e4e517" containerName="nova-scheduler-scheduler" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.157058 5049 scope.go:117] "RemoveContainer" containerID="7726471af8c08f6c10586088666d4b505ea63dc887de8daf8cc743ea2173f215" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.165572 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.194278 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d95d-account-create-update-5h486"] Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.206312 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d95d-account-create-update-5h486"] Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.237149 5049 scope.go:117] "RemoveContainer" containerID="d21f3a14ae01f4df5aae43325cf7e59b6173d081cb638021f0c8cac28f33e789" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.245857 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-combined-ca-bundle\") pod \"d89c9402-b4c3-4180-8a61-9e63497ebb66\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246080 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-scripts\") pod \"59384c20-c0a3-4524-9ddb-407b96e8f882\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246119 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d89c9402-b4c3-4180-8a61-9e63497ebb66-httpd-run\") pod \"d89c9402-b4c3-4180-8a61-9e63497ebb66\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246138 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-config-data\") pod \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246157 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-config-data-custom\") pod \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246185 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-internal-tls-certs\") pod \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246207 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-combined-ca-bundle\") pod \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246230 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59384c20-c0a3-4524-9ddb-407b96e8f882-logs\") pod \"59384c20-c0a3-4524-9ddb-407b96e8f882\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246252 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"d89c9402-b4c3-4180-8a61-9e63497ebb66\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246304 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-public-tls-certs\") pod \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246319 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-scripts\") pod \"d89c9402-b4c3-4180-8a61-9e63497ebb66\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246354 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-combined-ca-bundle\") pod \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246372 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-logs\") pod \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246390 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-config-data\") pod \"d89c9402-b4c3-4180-8a61-9e63497ebb66\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246407 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-scripts\") pod \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246426 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-combined-ca-bundle\") pod \"59384c20-c0a3-4524-9ddb-407b96e8f882\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246443 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-etc-machine-id\") pod \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246474 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjs49\" (UniqueName: \"kubernetes.io/projected/d89c9402-b4c3-4180-8a61-9e63497ebb66-kube-api-access-wjs49\") pod \"d89c9402-b4c3-4180-8a61-9e63497ebb66\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246504 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-config-data\") pod \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246520 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d89c9402-b4c3-4180-8a61-9e63497ebb66-logs\") pod \"d89c9402-b4c3-4180-8a61-9e63497ebb66\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246535 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-combined-ca-bundle\") pod \"b915091f-1f89-4602-8b1f-2214883644e0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246552 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59384c20-c0a3-4524-9ddb-407b96e8f882-httpd-run\") pod \"59384c20-c0a3-4524-9ddb-407b96e8f882\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246589 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-kube-state-metrics-tls-certs\") pod \"b915091f-1f89-4602-8b1f-2214883644e0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246608 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-kube-state-metrics-tls-config\") pod \"b915091f-1f89-4602-8b1f-2214883644e0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246627 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vvhx\" (UniqueName: \"kubernetes.io/projected/59384c20-c0a3-4524-9ddb-407b96e8f882-kube-api-access-7vvhx\") pod \"59384c20-c0a3-4524-9ddb-407b96e8f882\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246660 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-internal-tls-certs\") pod \"d89c9402-b4c3-4180-8a61-9e63497ebb66\" (UID: \"d89c9402-b4c3-4180-8a61-9e63497ebb66\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246690 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-logs\") pod \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246715 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kghhw\" (UniqueName: \"kubernetes.io/projected/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-kube-api-access-kghhw\") pod \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246734 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-nova-metadata-tls-certs\") pod \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\" (UID: \"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246758 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"59384c20-c0a3-4524-9ddb-407b96e8f882\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246788 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78t4t\" (UniqueName: \"kubernetes.io/projected/b915091f-1f89-4602-8b1f-2214883644e0-kube-api-access-78t4t\") pod \"b915091f-1f89-4602-8b1f-2214883644e0\" (UID: \"b915091f-1f89-4602-8b1f-2214883644e0\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246806 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjf9d\" (UniqueName: \"kubernetes.io/projected/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-kube-api-access-zjf9d\") pod \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\" (UID: \"492cb82e-33fb-4fc7-85e2-7d4285e5ff00\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246828 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-public-tls-certs\") pod \"59384c20-c0a3-4524-9ddb-407b96e8f882\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.246850 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-config-data\") pod \"59384c20-c0a3-4524-9ddb-407b96e8f882\" (UID: \"59384c20-c0a3-4524-9ddb-407b96e8f882\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.247096 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "492cb82e-33fb-4fc7-85e2-7d4285e5ff00" (UID: "492cb82e-33fb-4fc7-85e2-7d4285e5ff00"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.247310 5049 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.254657 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "d89c9402-b4c3-4180-8a61-9e63497ebb66" (UID: "d89c9402-b4c3-4180-8a61-9e63497ebb66"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.255166 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59384c20-c0a3-4524-9ddb-407b96e8f882-logs" (OuterVolumeSpecName: "logs") pod "59384c20-c0a3-4524-9ddb-407b96e8f882" (UID: "59384c20-c0a3-4524-9ddb-407b96e8f882"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.263101 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "492cb82e-33fb-4fc7-85e2-7d4285e5ff00" (UID: "492cb82e-33fb-4fc7-85e2-7d4285e5ff00"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.264782 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59384c20-c0a3-4524-9ddb-407b96e8f882-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "59384c20-c0a3-4524-9ddb-407b96e8f882" (UID: "59384c20-c0a3-4524-9ddb-407b96e8f882"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.265308 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-logs" (OuterVolumeSpecName: "logs") pod "492cb82e-33fb-4fc7-85e2-7d4285e5ff00" (UID: "492cb82e-33fb-4fc7-85e2-7d4285e5ff00"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.268059 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d89c9402-b4c3-4180-8a61-9e63497ebb66-logs" (OuterVolumeSpecName: "logs") pod "d89c9402-b4c3-4180-8a61-9e63497ebb66" (UID: "d89c9402-b4c3-4180-8a61-9e63497ebb66"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.268196 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d89c9402-b4c3-4180-8a61-9e63497ebb66-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d89c9402-b4c3-4180-8a61-9e63497ebb66" (UID: "d89c9402-b4c3-4180-8a61-9e63497ebb66"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.270476 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-scripts" (OuterVolumeSpecName: "scripts") pod "59384c20-c0a3-4524-9ddb-407b96e8f882" (UID: "59384c20-c0a3-4524-9ddb-407b96e8f882"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.271346 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "59384c20-c0a3-4524-9ddb-407b96e8f882" (UID: "59384c20-c0a3-4524-9ddb-407b96e8f882"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.273836 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-logs" (OuterVolumeSpecName: "logs") pod "65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" (UID: "65eb2d0b-ab1a-4a97-afdc-73592ac6cb29"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.279303 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6dbbddc5bc-5k4jm"] Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.281815 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-scripts" (OuterVolumeSpecName: "scripts") pod "d89c9402-b4c3-4180-8a61-9e63497ebb66" (UID: "d89c9402-b4c3-4180-8a61-9e63497ebb66"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.286707 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59384c20-c0a3-4524-9ddb-407b96e8f882-kube-api-access-7vvhx" (OuterVolumeSpecName: "kube-api-access-7vvhx") pod "59384c20-c0a3-4524-9ddb-407b96e8f882" (UID: "59384c20-c0a3-4524-9ddb-407b96e8f882"). InnerVolumeSpecName "kube-api-access-7vvhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.286777 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89c9402-b4c3-4180-8a61-9e63497ebb66-kube-api-access-wjs49" (OuterVolumeSpecName: "kube-api-access-wjs49") pod "d89c9402-b4c3-4180-8a61-9e63497ebb66" (UID: "d89c9402-b4c3-4180-8a61-9e63497ebb66"). InnerVolumeSpecName "kube-api-access-wjs49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.287208 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-kube-api-access-kghhw" (OuterVolumeSpecName: "kube-api-access-kghhw") pod "65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" (UID: "65eb2d0b-ab1a-4a97-afdc-73592ac6cb29"). InnerVolumeSpecName "kube-api-access-kghhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.293036 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-6dbbddc5bc-5k4jm"] Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.299931 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-kube-api-access-zjf9d" (OuterVolumeSpecName: "kube-api-access-zjf9d") pod "492cb82e-33fb-4fc7-85e2-7d4285e5ff00" (UID: "492cb82e-33fb-4fc7-85e2-7d4285e5ff00"). InnerVolumeSpecName "kube-api-access-zjf9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.300202 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b915091f-1f89-4602-8b1f-2214883644e0-kube-api-access-78t4t" (OuterVolumeSpecName: "kube-api-access-78t4t") pod "b915091f-1f89-4602-8b1f-2214883644e0" (UID: "b915091f-1f89-4602-8b1f-2214883644e0"). InnerVolumeSpecName "kube-api-access-78t4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.300643 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-scripts" (OuterVolumeSpecName: "scripts") pod "492cb82e-33fb-4fc7-85e2-7d4285e5ff00" (UID: "492cb82e-33fb-4fc7-85e2-7d4285e5ff00"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367184 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367208 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kghhw\" (UniqueName: \"kubernetes.io/projected/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-kube-api-access-kghhw\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367229 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367241 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78t4t\" (UniqueName: \"kubernetes.io/projected/b915091f-1f89-4602-8b1f-2214883644e0-kube-api-access-78t4t\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367252 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjf9d\" (UniqueName: \"kubernetes.io/projected/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-kube-api-access-zjf9d\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367260 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367269 5049 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d89c9402-b4c3-4180-8a61-9e63497ebb66-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367278 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367286 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59384c20-c0a3-4524-9ddb-407b96e8f882-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367299 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367307 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367316 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367324 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367333 5049 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-wjs49\" (UniqueName: \"kubernetes.io/projected/d89c9402-b4c3-4180-8a61-9e63497ebb66-kube-api-access-wjs49\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367342 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d89c9402-b4c3-4180-8a61-9e63497ebb66-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367350 5049 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/59384c20-c0a3-4524-9ddb-407b96e8f882-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.367358 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vvhx\" (UniqueName: \"kubernetes.io/projected/59384c20-c0a3-4524-9ddb-407b96e8f882-kube-api-access-7vvhx\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.375188 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" (UID: "65eb2d0b-ab1a-4a97-afdc-73592ac6cb29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.380462 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d89c9402-b4c3-4180-8a61-9e63497ebb66" (UID: "d89c9402-b4c3-4180-8a61-9e63497ebb66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.414687 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "b915091f-1f89-4602-8b1f-2214883644e0" (UID: "b915091f-1f89-4602-8b1f-2214883644e0"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.424548 5049 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.436228 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-config-data" (OuterVolumeSpecName: "config-data") pod "65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" (UID: "65eb2d0b-ab1a-4a97-afdc-73592ac6cb29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.443829 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "492cb82e-33fb-4fc7-85e2-7d4285e5ff00" (UID: "492cb82e-33fb-4fc7-85e2-7d4285e5ff00"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.455922 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "492cb82e-33fb-4fc7-85e2-7d4285e5ff00" (UID: "492cb82e-33fb-4fc7-85e2-7d4285e5ff00"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.458489 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "b915091f-1f89-4602-8b1f-2214883644e0" (UID: "b915091f-1f89-4602-8b1f-2214883644e0"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.467277 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" (UID: "65eb2d0b-ab1a-4a97-afdc-73592ac6cb29"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.474321 5049 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.474351 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.474361 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.474370 5049 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.474379 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.474389 5049 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.474399 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.474452 5049 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.474462 5049 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.477555 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d89c9402-b4c3-4180-8a61-9e63497ebb66" (UID: "d89c9402-b4c3-4180-8a61-9e63497ebb66"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.484828 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-config-data" (OuterVolumeSpecName: "config-data") pod "492cb82e-33fb-4fc7-85e2-7d4285e5ff00" (UID: "492cb82e-33fb-4fc7-85e2-7d4285e5ff00"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.491510 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "59384c20-c0a3-4524-9ddb-407b96e8f882" (UID: "59384c20-c0a3-4524-9ddb-407b96e8f882"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.504136 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-csmlt"] Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.510873 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59384c20-c0a3-4524-9ddb-407b96e8f882" (UID: "59384c20-c0a3-4524-9ddb-407b96e8f882"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.519750 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-config-data" (OuterVolumeSpecName: "config-data") pod "59384c20-c0a3-4524-9ddb-407b96e8f882" (UID: "59384c20-c0a3-4524-9ddb-407b96e8f882"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: E0127 17:21:24.520990 5049 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 17:21:24 crc kubenswrapper[5049]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 27 17:21:24 crc kubenswrapper[5049]: Jan 27 17:21:24 crc kubenswrapper[5049]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 27 17:21:24 crc kubenswrapper[5049]: Jan 27 17:21:24 crc kubenswrapper[5049]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 27 17:21:24 crc kubenswrapper[5049]: Jan 27 17:21:24 crc kubenswrapper[5049]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 27 17:21:24 crc kubenswrapper[5049]: Jan 27 17:21:24 crc kubenswrapper[5049]: if [ -n "" ]; then Jan 27 17:21:24 crc kubenswrapper[5049]: GRANT_DATABASE="" Jan 27 17:21:24 crc kubenswrapper[5049]: else Jan 27 17:21:24 crc kubenswrapper[5049]: GRANT_DATABASE="*" Jan 27 17:21:24 crc kubenswrapper[5049]: fi Jan 27 17:21:24 crc kubenswrapper[5049]: Jan 27 17:21:24 crc kubenswrapper[5049]: # going for maximum compatibility here: Jan 27 17:21:24 crc kubenswrapper[5049]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 27 17:21:24 crc kubenswrapper[5049]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 27 17:21:24 crc kubenswrapper[5049]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 27 17:21:24 crc kubenswrapper[5049]: # support updates Jan 27 17:21:24 crc kubenswrapper[5049]: Jan 27 17:21:24 crc kubenswrapper[5049]: $MYSQL_CMD < logger="UnhandledError" Jan 27 17:21:24 crc kubenswrapper[5049]: E0127 17:21:24.522386 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-csmlt" podUID="41e50a09-5c4d-4898-bdd7-16d85fa7c90d" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.525123 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-config-data" (OuterVolumeSpecName: "config-data") pod "d89c9402-b4c3-4180-8a61-9e63497ebb66" (UID: "d89c9402-b4c3-4180-8a61-9e63497ebb66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.527890 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b915091f-1f89-4602-8b1f-2214883644e0" (UID: "b915091f-1f89-4602-8b1f-2214883644e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.538615 5049 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.548590 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "492cb82e-33fb-4fc7-85e2-7d4285e5ff00" (UID: "492cb82e-33fb-4fc7-85e2-7d4285e5ff00"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.558328 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.561811 5049 scope.go:117] "RemoveContainer" containerID="7726471af8c08f6c10586088666d4b505ea63dc887de8daf8cc743ea2173f215" Jan 27 17:21:24 crc kubenswrapper[5049]: E0127 17:21:24.562115 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7726471af8c08f6c10586088666d4b505ea63dc887de8daf8cc743ea2173f215\": container with ID starting with 7726471af8c08f6c10586088666d4b505ea63dc887de8daf8cc743ea2173f215 not found: ID does not exist" containerID="7726471af8c08f6c10586088666d4b505ea63dc887de8daf8cc743ea2173f215" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.562143 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7726471af8c08f6c10586088666d4b505ea63dc887de8daf8cc743ea2173f215"} err="failed to get container status \"7726471af8c08f6c10586088666d4b505ea63dc887de8daf8cc743ea2173f215\": rpc error: code = NotFound desc = could not find container \"7726471af8c08f6c10586088666d4b505ea63dc887de8daf8cc743ea2173f215\": container with ID starting with 7726471af8c08f6c10586088666d4b505ea63dc887de8daf8cc743ea2173f215 not found: ID does not exist" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.562164 5049 scope.go:117] "RemoveContainer" containerID="d21f3a14ae01f4df5aae43325cf7e59b6173d081cb638021f0c8cac28f33e789" Jan 27 17:21:24 crc kubenswrapper[5049]: E0127 17:21:24.562808 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21f3a14ae01f4df5aae43325cf7e59b6173d081cb638021f0c8cac28f33e789\": container with ID starting with d21f3a14ae01f4df5aae43325cf7e59b6173d081cb638021f0c8cac28f33e789 not found: ID does not exist" containerID="d21f3a14ae01f4df5aae43325cf7e59b6173d081cb638021f0c8cac28f33e789" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.562835 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21f3a14ae01f4df5aae43325cf7e59b6173d081cb638021f0c8cac28f33e789"} err="failed to get container status \"d21f3a14ae01f4df5aae43325cf7e59b6173d081cb638021f0c8cac28f33e789\": rpc error: code = NotFound desc = could not find container \"d21f3a14ae01f4df5aae43325cf7e59b6173d081cb638021f0c8cac28f33e789\": container with ID starting with d21f3a14ae01f4df5aae43325cf7e59b6173d081cb638021f0c8cac28f33e789 not found: ID does not exist" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.564234 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.568418 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.576471 5049 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.576497 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.576506 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.576516 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b915091f-1f89-4602-8b1f-2214883644e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.576524 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/492cb82e-33fb-4fc7-85e2-7d4285e5ff00-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.576533 5049 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d89c9402-b4c3-4180-8a61-9e63497ebb66-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.576541 5049 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.576549 5049 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.576557 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59384c20-c0a3-4524-9ddb-407b96e8f882-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.677893 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-954hb\" (UniqueName: \"kubernetes.io/projected/fd8752fa-c3a1-4eba-91dc-6af200eb8168-kube-api-access-954hb\") pod \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678158 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd8752fa-c3a1-4eba-91dc-6af200eb8168-logs\") pod \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678188 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-combined-ca-bundle\") pod \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678217 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-config-data-custom\") pod \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678250 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-config-data-custom\") pod \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678308 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-public-tls-certs\") pod \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678385 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-internal-tls-certs\") pod \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678449 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25ad8919-34a1-4d3c-8f82-a8902bc857ff-logs\") pod \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678490 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9edd1d0-64dc-4c83-9149-04c772e4e517-config-data\") pod \"c9edd1d0-64dc-4c83-9149-04c772e4e517\" (UID: \"c9edd1d0-64dc-4c83-9149-04c772e4e517\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678514 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-combined-ca-bundle\") pod \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678548 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffzjv\" (UniqueName: \"kubernetes.io/projected/25ad8919-34a1-4d3c-8f82-a8902bc857ff-kube-api-access-ffzjv\") pod \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678608 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgtmc\" (UniqueName: \"kubernetes.io/projected/c9edd1d0-64dc-4c83-9149-04c772e4e517-kube-api-access-wgtmc\") pod \"c9edd1d0-64dc-4c83-9149-04c772e4e517\" (UID: \"c9edd1d0-64dc-4c83-9149-04c772e4e517\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678653 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c9edd1d0-64dc-4c83-9149-04c772e4e517-combined-ca-bundle\") pod \"c9edd1d0-64dc-4c83-9149-04c772e4e517\" (UID: \"c9edd1d0-64dc-4c83-9149-04c772e4e517\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678702 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-config-data\") pod \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\" (UID: \"fd8752fa-c3a1-4eba-91dc-6af200eb8168\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678754 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-config-data\") pod \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\" (UID: \"25ad8919-34a1-4d3c-8f82-a8902bc857ff\") " Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.678881 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd8752fa-c3a1-4eba-91dc-6af200eb8168-logs" (OuterVolumeSpecName: "logs") pod "fd8752fa-c3a1-4eba-91dc-6af200eb8168" (UID: "fd8752fa-c3a1-4eba-91dc-6af200eb8168"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.679208 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd8752fa-c3a1-4eba-91dc-6af200eb8168-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.681031 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd8752fa-c3a1-4eba-91dc-6af200eb8168-kube-api-access-954hb" (OuterVolumeSpecName: "kube-api-access-954hb") pod "fd8752fa-c3a1-4eba-91dc-6af200eb8168" (UID: "fd8752fa-c3a1-4eba-91dc-6af200eb8168"). InnerVolumeSpecName "kube-api-access-954hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.682101 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fd8752fa-c3a1-4eba-91dc-6af200eb8168" (UID: "fd8752fa-c3a1-4eba-91dc-6af200eb8168"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.682586 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25ad8919-34a1-4d3c-8f82-a8902bc857ff-logs" (OuterVolumeSpecName: "logs") pod "25ad8919-34a1-4d3c-8f82-a8902bc857ff" (UID: "25ad8919-34a1-4d3c-8f82-a8902bc857ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.685308 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ad8919-34a1-4d3c-8f82-a8902bc857ff-kube-api-access-ffzjv" (OuterVolumeSpecName: "kube-api-access-ffzjv") pod "25ad8919-34a1-4d3c-8f82-a8902bc857ff" (UID: "25ad8919-34a1-4d3c-8f82-a8902bc857ff"). InnerVolumeSpecName "kube-api-access-ffzjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.686793 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "25ad8919-34a1-4d3c-8f82-a8902bc857ff" (UID: "25ad8919-34a1-4d3c-8f82-a8902bc857ff"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.690777 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9edd1d0-64dc-4c83-9149-04c772e4e517-kube-api-access-wgtmc" (OuterVolumeSpecName: "kube-api-access-wgtmc") pod "c9edd1d0-64dc-4c83-9149-04c772e4e517" (UID: "c9edd1d0-64dc-4c83-9149-04c772e4e517"). InnerVolumeSpecName "kube-api-access-wgtmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.740037 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9edd1d0-64dc-4c83-9149-04c772e4e517-config-data" (OuterVolumeSpecName: "config-data") pod "c9edd1d0-64dc-4c83-9149-04c772e4e517" (UID: "c9edd1d0-64dc-4c83-9149-04c772e4e517"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.743533 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25ad8919-34a1-4d3c-8f82-a8902bc857ff" (UID: "25ad8919-34a1-4d3c-8f82-a8902bc857ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.746207 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9edd1d0-64dc-4c83-9149-04c772e4e517-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9edd1d0-64dc-4c83-9149-04c772e4e517" (UID: "c9edd1d0-64dc-4c83-9149-04c772e4e517"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.747296 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd8752fa-c3a1-4eba-91dc-6af200eb8168" (UID: "fd8752fa-c3a1-4eba-91dc-6af200eb8168"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.770369 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-config-data" (OuterVolumeSpecName: "config-data") pod "fd8752fa-c3a1-4eba-91dc-6af200eb8168" (UID: "fd8752fa-c3a1-4eba-91dc-6af200eb8168"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.772603 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-config-data" (OuterVolumeSpecName: "config-data") pod "25ad8919-34a1-4d3c-8f82-a8902bc857ff" (UID: "25ad8919-34a1-4d3c-8f82-a8902bc857ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.779394 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fd8752fa-c3a1-4eba-91dc-6af200eb8168" (UID: "fd8752fa-c3a1-4eba-91dc-6af200eb8168"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.780550 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.780575 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.780588 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.780601 5049 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.780613 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25ad8919-34a1-4d3c-8f82-a8902bc857ff-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.780625 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9edd1d0-64dc-4c83-9149-04c772e4e517-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.780636 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.780647 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffzjv\" (UniqueName: \"kubernetes.io/projected/25ad8919-34a1-4d3c-8f82-a8902bc857ff-kube-api-access-ffzjv\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.780661 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgtmc\" (UniqueName: \"kubernetes.io/projected/c9edd1d0-64dc-4c83-9149-04c772e4e517-kube-api-access-wgtmc\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.780763 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9edd1d0-64dc-4c83-9149-04c772e4e517-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.780776 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.780790 5049 reconciler_common.go:293] 
"Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ad8919-34a1-4d3c-8f82-a8902bc857ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.780801 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-954hb\" (UniqueName: \"kubernetes.io/projected/fd8752fa-c3a1-4eba-91dc-6af200eb8168-kube-api-access-954hb\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.784315 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fd8752fa-c3a1-4eba-91dc-6af200eb8168" (UID: "fd8752fa-c3a1-4eba-91dc-6af200eb8168"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.791249 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c4bd57ddb-fz2dp" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.795253 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"59384c20-c0a3-4524-9ddb-407b96e8f882","Type":"ContainerDied","Data":"b52d4e3eed508ac552122ca55a3dd4a2f8e5b9b1845e3a15a51086f1ba5ef723"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.795303 5049 scope.go:117] "RemoveContainer" containerID="d944265d72bf3afb4b6b73f0c7c83289738cd7f4ed8517272ac5b673ffa17c8f" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.800281 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.800853 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-csmlt" event={"ID":"41e50a09-5c4d-4898-bdd7-16d85fa7c90d","Type":"ContainerStarted","Data":"05340a53f78d182cd31a82ff9ddb887d00cfae8463ad6466c7b4eac151b36793"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.811767 5049 generic.go:334] "Generic (PLEG): container finished" podID="fd8752fa-c3a1-4eba-91dc-6af200eb8168" containerID="d59eba8208304f410983b2c55e7914a2356771226c32f5c318d3bab3cadfc4df" exitCode=0 Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.811875 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c874955f4-txmc8" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.811881 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c874955f4-txmc8" event={"ID":"fd8752fa-c3a1-4eba-91dc-6af200eb8168","Type":"ContainerDied","Data":"d59eba8208304f410983b2c55e7914a2356771226c32f5c318d3bab3cadfc4df"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.811909 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c874955f4-txmc8" event={"ID":"fd8752fa-c3a1-4eba-91dc-6af200eb8168","Type":"ContainerDied","Data":"e439646be44da8ade07bfbde5536e4e5d953c83d527d70a6d3581887b1bd1073"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.819311 5049 generic.go:334] "Generic (PLEG): container finished" podID="25ad8919-34a1-4d3c-8f82-a8902bc857ff" containerID="6d80fe5a8e0b9fc37106192401722c045ef256250fc4655a6cdb8a26abe66cca" exitCode=0 Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.819556 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.819828 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" event={"ID":"25ad8919-34a1-4d3c-8f82-a8902bc857ff","Type":"ContainerDied","Data":"6d80fe5a8e0b9fc37106192401722c045ef256250fc4655a6cdb8a26abe66cca"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.819913 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5d9fdfc85c-bpzmb" event={"ID":"25ad8919-34a1-4d3c-8f82-a8902bc857ff","Type":"ContainerDied","Data":"79b865e7beb8089343e65f2cf1b89a0a0a1e891828529073ad42d44609f0433b"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.833180 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.833201 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d89c9402-b4c3-4180-8a61-9e63497ebb66","Type":"ContainerDied","Data":"3795aa7b39ed977e0043cb4f820368620371658b97c32d3b6d2086d89acb44e8"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.836242 5049 generic.go:334] "Generic (PLEG): container finished" podID="c9edd1d0-64dc-4c83-9149-04c772e4e517" containerID="c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362" exitCode=0 Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.836298 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9edd1d0-64dc-4c83-9149-04c772e4e517","Type":"ContainerDied","Data":"c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.836318 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9edd1d0-64dc-4c83-9149-04c772e4e517","Type":"ContainerDied","Data":"cbb55c61c4738bd07007615aba68d952479e48cb7685479dfa3aa7b3856bb820"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.836367 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.842646 5049 generic.go:334] "Generic (PLEG): container finished" podID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" containerID="a626eed524f20c0031f36e1a58fe0bc431e5f39289e2443f479f2a4aef39497a" exitCode=0 Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.842710 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29","Type":"ContainerDied","Data":"a626eed524f20c0031f36e1a58fe0bc431e5f39289e2443f479f2a4aef39497a"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.842728 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"65eb2d0b-ab1a-4a97-afdc-73592ac6cb29","Type":"ContainerDied","Data":"b53278d8f9c6efdb8556caa5ee12eae569f1fa47705673087d97b645d1898d41"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.843656 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.850857 5049 generic.go:334] "Generic (PLEG): container finished" podID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerID="699c3c69ddd586c2ba6efa14873e1ef576bd164da6c7b87cfc3147d472bbdbd9" exitCode=0 Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.850887 5049 generic.go:334] "Generic (PLEG): container finished" podID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerID="0a77e3ad3f76bbeeb3b5ada4b0ddb6ff8c950ed1581373853362db15eb5b6c69" exitCode=0 Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.850936 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6b92aaa-ae4b-41ba-bd72-5e6d01518000","Type":"ContainerDied","Data":"699c3c69ddd586c2ba6efa14873e1ef576bd164da6c7b87cfc3147d472bbdbd9"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.850962 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6b92aaa-ae4b-41ba-bd72-5e6d01518000","Type":"ContainerDied","Data":"0a77e3ad3f76bbeeb3b5ada4b0ddb6ff8c950ed1581373853362db15eb5b6c69"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.853405 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6c4bd57ddb-fz2dp"] Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.856023 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"492cb82e-33fb-4fc7-85e2-7d4285e5ff00","Type":"ContainerDied","Data":"290e2863175e2ea7918f790a1950d0f1dc7fa78c1e79d5e8a5400747209cb336"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.856057 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.857099 5049 scope.go:117] "RemoveContainer" containerID="2cade812917319aaec34ab2b32477c1d71dca9c03ae47024b7ad8adb5f1b00d0" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.860373 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6c4bd57ddb-fz2dp"] Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.862869 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b915091f-1f89-4602-8b1f-2214883644e0","Type":"ContainerDied","Data":"a8b03b5e811b51adb351ce218a471482ed94ea5e53b0a5591d871f7870a3bcc1"} Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.862949 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.888129 5049 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd8752fa-c3a1-4eba-91dc-6af200eb8168-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.896696 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6c874955f4-txmc8"] Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.904349 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6c874955f4-txmc8"] Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.922915 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5d9fdfc85c-bpzmb"] Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.924739 5049 scope.go:117] "RemoveContainer" containerID="d59eba8208304f410983b2c55e7914a2356771226c32f5c318d3bab3cadfc4df" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.933319 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5d9fdfc85c-bpzmb"] Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.953415 5049 scope.go:117] "RemoveContainer" containerID="35bd66078657b7d73fc268a9451cde269684cf371f822f4a1fccb3ebe53da3c9" Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.956991 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 17:21:24 crc kubenswrapper[5049]: I0127 17:21:24.968141 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:24.981757 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:24.990124 5049 scope.go:117] "RemoveContainer" containerID="d59eba8208304f410983b2c55e7914a2356771226c32f5c318d3bab3cadfc4df" Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:24.990577 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d59eba8208304f410983b2c55e7914a2356771226c32f5c318d3bab3cadfc4df\": container with ID starting with d59eba8208304f410983b2c55e7914a2356771226c32f5c318d3bab3cadfc4df not found: ID does not exist" containerID="d59eba8208304f410983b2c55e7914a2356771226c32f5c318d3bab3cadfc4df" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:24.990598 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d59eba8208304f410983b2c55e7914a2356771226c32f5c318d3bab3cadfc4df"} err="failed to get container status \"d59eba8208304f410983b2c55e7914a2356771226c32f5c318d3bab3cadfc4df\": rpc error: code = NotFound desc = could not find container \"d59eba8208304f410983b2c55e7914a2356771226c32f5c318d3bab3cadfc4df\": container with ID starting with d59eba8208304f410983b2c55e7914a2356771226c32f5c318d3bab3cadfc4df not found: ID does not exist" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:24.990620 5049 scope.go:117] "RemoveContainer" containerID="35bd66078657b7d73fc268a9451cde269684cf371f822f4a1fccb3ebe53da3c9" Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:24.991161 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35bd66078657b7d73fc268a9451cde269684cf371f822f4a1fccb3ebe53da3c9\": container with ID starting with 
35bd66078657b7d73fc268a9451cde269684cf371f822f4a1fccb3ebe53da3c9 not found: ID does not exist" containerID="35bd66078657b7d73fc268a9451cde269684cf371f822f4a1fccb3ebe53da3c9" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:24.991178 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35bd66078657b7d73fc268a9451cde269684cf371f822f4a1fccb3ebe53da3c9"} err="failed to get container status \"35bd66078657b7d73fc268a9451cde269684cf371f822f4a1fccb3ebe53da3c9\": rpc error: code = NotFound desc = could not find container \"35bd66078657b7d73fc268a9451cde269684cf371f822f4a1fccb3ebe53da3c9\": container with ID starting with 35bd66078657b7d73fc268a9451cde269684cf371f822f4a1fccb3ebe53da3c9 not found: ID does not exist" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:24.991190 5049 scope.go:117] "RemoveContainer" containerID="6d80fe5a8e0b9fc37106192401722c045ef256250fc4655a6cdb8a26abe66cca" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:24.992902 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.005633 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.018431 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.020127 5049 scope.go:117] "RemoveContainer" containerID="88a4122cf8632a187f41681a10f1321b9f2c19ffcd4f009a2731da082771c3b1" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.024470 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.030119 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.071426 5049 scope.go:117] "RemoveContainer" containerID="6d80fe5a8e0b9fc37106192401722c045ef256250fc4655a6cdb8a26abe66cca" Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.072607 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d80fe5a8e0b9fc37106192401722c045ef256250fc4655a6cdb8a26abe66cca\": container with ID starting with 6d80fe5a8e0b9fc37106192401722c045ef256250fc4655a6cdb8a26abe66cca not found: ID does not exist" containerID="6d80fe5a8e0b9fc37106192401722c045ef256250fc4655a6cdb8a26abe66cca" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.072684 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d80fe5a8e0b9fc37106192401722c045ef256250fc4655a6cdb8a26abe66cca"} err="failed to get container status \"6d80fe5a8e0b9fc37106192401722c045ef256250fc4655a6cdb8a26abe66cca\": rpc error: code = NotFound desc = could not find container \"6d80fe5a8e0b9fc37106192401722c045ef256250fc4655a6cdb8a26abe66cca\": container with ID starting with 6d80fe5a8e0b9fc37106192401722c045ef256250fc4655a6cdb8a26abe66cca not found: ID does not exist" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.072717 5049 scope.go:117] "RemoveContainer" containerID="88a4122cf8632a187f41681a10f1321b9f2c19ffcd4f009a2731da082771c3b1" Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.077329 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"88a4122cf8632a187f41681a10f1321b9f2c19ffcd4f009a2731da082771c3b1\": container with ID starting with 88a4122cf8632a187f41681a10f1321b9f2c19ffcd4f009a2731da082771c3b1 not found: ID does not exist" containerID="88a4122cf8632a187f41681a10f1321b9f2c19ffcd4f009a2731da082771c3b1" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.077378 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a4122cf8632a187f41681a10f1321b9f2c19ffcd4f009a2731da082771c3b1"} err="failed to get container status \"88a4122cf8632a187f41681a10f1321b9f2c19ffcd4f009a2731da082771c3b1\": rpc error: code = NotFound desc = could not find container \"88a4122cf8632a187f41681a10f1321b9f2c19ffcd4f009a2731da082771c3b1\": container with ID starting with 88a4122cf8632a187f41681a10f1321b9f2c19ffcd4f009a2731da082771c3b1 not found: ID does not exist" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.077407 5049 scope.go:117] "RemoveContainer" containerID="5054a75f289d476269fdd4cec1f526a79442c1e4b02d766eec63bd20c154c9f8" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.088552 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.112904 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.118657 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.123169 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.127733 5049 scope.go:117] "RemoveContainer" containerID="43c820e42c8a751a91420a3b6d5e21201fc9f6a6613e57796cab346cad30e3d9" Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.135524 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.135895 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.136065 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.136086 5049 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container 
process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovsdb-server" Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.136836 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.138096 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.138992 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.139014 5049 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovs-vswitchd" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.148453 5049 scope.go:117] "RemoveContainer" containerID="c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.169889 5049 scope.go:117] "RemoveContainer" containerID="c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362" Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.170321 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362\": container with ID starting with c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362 not found: ID does not exist" containerID="c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.170352 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362"} err="failed to get container status \"c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362\": rpc error: code = NotFound desc = could not find container \"c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362\": container with ID starting with c8df867495cbed41dcfcd5174e56c6a5552a36f8ff35c6aaa9b32c64025ba362 not found: ID does not exist" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.170375 5049 scope.go:117] "RemoveContainer" containerID="a626eed524f20c0031f36e1a58fe0bc431e5f39289e2443f479f2a4aef39497a" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.192274 5049 scope.go:117] "RemoveContainer" containerID="1c41eff4d4660fe08786a326daa886f989924f5e641181a59a3f5d76f30bfad2" Jan 27 17:21:25 crc 
kubenswrapper[5049]: I0127 17:21:25.237118 5049 scope.go:117] "RemoveContainer" containerID="a626eed524f20c0031f36e1a58fe0bc431e5f39289e2443f479f2a4aef39497a" Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.237471 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a626eed524f20c0031f36e1a58fe0bc431e5f39289e2443f479f2a4aef39497a\": container with ID starting with a626eed524f20c0031f36e1a58fe0bc431e5f39289e2443f479f2a4aef39497a not found: ID does not exist" containerID="a626eed524f20c0031f36e1a58fe0bc431e5f39289e2443f479f2a4aef39497a" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.237519 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a626eed524f20c0031f36e1a58fe0bc431e5f39289e2443f479f2a4aef39497a"} err="failed to get container status \"a626eed524f20c0031f36e1a58fe0bc431e5f39289e2443f479f2a4aef39497a\": rpc error: code = NotFound desc = could not find container \"a626eed524f20c0031f36e1a58fe0bc431e5f39289e2443f479f2a4aef39497a\": container with ID starting with a626eed524f20c0031f36e1a58fe0bc431e5f39289e2443f479f2a4aef39497a not found: ID does not exist" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.237553 5049 scope.go:117] "RemoveContainer" containerID="1c41eff4d4660fe08786a326daa886f989924f5e641181a59a3f5d76f30bfad2" Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.237882 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c41eff4d4660fe08786a326daa886f989924f5e641181a59a3f5d76f30bfad2\": container with ID starting with 1c41eff4d4660fe08786a326daa886f989924f5e641181a59a3f5d76f30bfad2 not found: ID does not exist" containerID="1c41eff4d4660fe08786a326daa886f989924f5e641181a59a3f5d76f30bfad2" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.237908 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c41eff4d4660fe08786a326daa886f989924f5e641181a59a3f5d76f30bfad2"} err="failed to get container status \"1c41eff4d4660fe08786a326daa886f989924f5e641181a59a3f5d76f30bfad2\": rpc error: code = NotFound desc = could not find container \"1c41eff4d4660fe08786a326daa886f989924f5e641181a59a3f5d76f30bfad2\": container with ID starting with 1c41eff4d4660fe08786a326daa886f989924f5e641181a59a3f5d76f30bfad2 not found: ID does not exist" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.237927 5049 scope.go:117] "RemoveContainer" containerID="166079e95268cb7e35e7bb3173c1768e058053e781153b7e92d90749146e26bf" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.320741 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-csmlt" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.354774 5049 scope.go:117] "RemoveContainer" containerID="3113dcce28048dab388fa9369937d0bd0a1fc6c1ae5f9d46acfb897247e15c0d" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.388001 5049 scope.go:117] "RemoveContainer" containerID="ff5438e2bba7d976fe7a35950c7d8f3e8815181c6b08e323c26c90c5eef3ef12" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.417302 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e50a09-5c4d-4898-bdd7-16d85fa7c90d-operator-scripts\") pod \"41e50a09-5c4d-4898-bdd7-16d85fa7c90d\" (UID: \"41e50a09-5c4d-4898-bdd7-16d85fa7c90d\") " Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.417392 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krzx7\" (UniqueName: \"kubernetes.io/projected/41e50a09-5c4d-4898-bdd7-16d85fa7c90d-kube-api-access-krzx7\") pod \"41e50a09-5c4d-4898-bdd7-16d85fa7c90d\" (UID: \"41e50a09-5c4d-4898-bdd7-16d85fa7c90d\") " Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.417845 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41e50a09-5c4d-4898-bdd7-16d85fa7c90d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "41e50a09-5c4d-4898-bdd7-16d85fa7c90d" (UID: "41e50a09-5c4d-4898-bdd7-16d85fa7c90d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.420982 5049 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 27 17:21:25 crc kubenswrapper[5049]: E0127 17:21:25.421076 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data podName:62ffcfe9-3e93-48ee-8d03-9b653d1bfede nodeName:}" failed. No retries permitted until 2026-01-27 17:21:33.421053058 +0000 UTC m=+1468.520026607 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data") pod "rabbitmq-server-0" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede") : configmap "rabbitmq-config-data" not found Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.421961 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41e50a09-5c4d-4898-bdd7-16d85fa7c90d-kube-api-access-krzx7" (OuterVolumeSpecName: "kube-api-access-krzx7") pod "41e50a09-5c4d-4898-bdd7-16d85fa7c90d" (UID: "41e50a09-5c4d-4898-bdd7-16d85fa7c90d"). InnerVolumeSpecName "kube-api-access-krzx7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.423902 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/41e50a09-5c4d-4898-bdd7-16d85fa7c90d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.423928 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krzx7\" (UniqueName: \"kubernetes.io/projected/41e50a09-5c4d-4898-bdd7-16d85fa7c90d-kube-api-access-krzx7\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.482621 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.482989 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="28327cb6-87a9-4b24-b8fb-f43c33076b1b" containerName="memcached" containerID="cri-o://1aad04186f3f290c52f9e3c6f44246f78807c70f72c854c4cfa401d9f8b67ba3" gracePeriod=30 Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.493127 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d89x5"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.503486 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.503720 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="22dc9694-6c5e-4ac3-99e3-910dac92573a" containerName="nova-cell1-conductor-conductor" containerID="cri-o://abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38" gracePeriod=30 Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.511742 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-d89x5"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.519234 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.519503 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="6c4b4464-1c98-412b-96cf-235908a4eaf6" containerName="nova-cell0-conductor-conductor" containerID="cri-o://7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba" gracePeriod=30 Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.522082 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jkb97"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.531309 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-jkb97"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.664626 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19fc2237-8102-4dce-ba61-c6466948289d" path="/var/lib/kubelet/pods/19fc2237-8102-4dce-ba61-c6466948289d/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.665506 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ad8919-34a1-4d3c-8f82-a8902bc857ff" path="/var/lib/kubelet/pods/25ad8919-34a1-4d3c-8f82-a8902bc857ff/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.666059 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="492cb82e-33fb-4fc7-85e2-7d4285e5ff00" 
path="/var/lib/kubelet/pods/492cb82e-33fb-4fc7-85e2-7d4285e5ff00/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.667091 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e27cd2f-9407-4444-9914-9892b1e41d13" path="/var/lib/kubelet/pods/4e27cd2f-9407-4444-9914-9892b1e41d13/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.667543 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54f65153-193c-49dd-91d3-b7eecb30c74b" path="/var/lib/kubelet/pods/54f65153-193c-49dd-91d3-b7eecb30c74b/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.668439 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59384c20-c0a3-4524-9ddb-407b96e8f882" path="/var/lib/kubelet/pods/59384c20-c0a3-4524-9ddb-407b96e8f882/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.669024 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" path="/var/lib/kubelet/pods/65eb2d0b-ab1a-4a97-afdc-73592ac6cb29/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.669982 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85620b2d-c74a-4c51-8129-c747016dc357" path="/var/lib/kubelet/pods/85620b2d-c74a-4c51-8129-c747016dc357/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.670582 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95574d5f-6872-4ff3-a7a4-44a960bb46f0" path="/var/lib/kubelet/pods/95574d5f-6872-4ff3-a7a4-44a960bb46f0/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.671110 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282" path="/var/lib/kubelet/pods/98ce5a8a-7cc8-49c8-9d8a-9ac9ef8d3282/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.671911 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a923a49d-7e17-40a5-975a-9f4a39f92d51" path="/var/lib/kubelet/pods/a923a49d-7e17-40a5-975a-9f4a39f92d51/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.672418 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b915091f-1f89-4602-8b1f-2214883644e0" path="/var/lib/kubelet/pods/b915091f-1f89-4602-8b1f-2214883644e0/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.672876 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9edd1d0-64dc-4c83-9149-04c772e4e517" path="/var/lib/kubelet/pods/c9edd1d0-64dc-4c83-9149-04c772e4e517/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.673771 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d89c9402-b4c3-4180-8a61-9e63497ebb66" path="/var/lib/kubelet/pods/d89c9402-b4c3-4180-8a61-9e63497ebb66/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.674338 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd8752fa-c3a1-4eba-91dc-6af200eb8168" path="/var/lib/kubelet/pods/fd8752fa-c3a1-4eba-91dc-6af200eb8168/volumes" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.765618 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pv2qx" podUID="389cf061-3e03-4e54-bf97-c88a747fd18b" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.799754 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pv2qx" 
podUID="389cf061-3e03-4e54-bf97-c88a747fd18b" containerName="ovn-controller" probeResult="failure" output=< Jan 27 17:21:25 crc kubenswrapper[5049]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 27 17:21:25 crc kubenswrapper[5049]: > Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.875401 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-csmlt" event={"ID":"41e50a09-5c4d-4898-bdd7-16d85fa7c90d","Type":"ContainerDied","Data":"05340a53f78d182cd31a82ff9ddb887d00cfae8463ad6466c7b4eac151b36793"} Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.875440 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-csmlt" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.879433 5049 generic.go:334] "Generic (PLEG): container finished" podID="294e84c0-d49f-4e45-87d5-085c7accf51e" containerID="b7a20b9de92877a9ab934476ffa27a2a939c104c0bb643b1807fc727b4746d30" exitCode=0 Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.879487 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"294e84c0-d49f-4e45-87d5-085c7accf51e","Type":"ContainerDied","Data":"b7a20b9de92877a9ab934476ffa27a2a939c104c0bb643b1807fc727b4746d30"} Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.893351 5049 generic.go:334] "Generic (PLEG): container finished" podID="de39a65a-7265-4418-a94b-f8f8f30c3807" containerID="4c24478588bff8e90f1e2d67898dd6f439331bc9e8f594f828123fbc7e460d13" exitCode=0 Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.893397 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"de39a65a-7265-4418-a94b-f8f8f30c3807","Type":"ContainerDied","Data":"4c24478588bff8e90f1e2d67898dd6f439331bc9e8f594f828123fbc7e460d13"} Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.910076 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_051db122-80f6-47fc-8d5c-5244d92e593d/ovn-northd/0.log" Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.910245 5049 generic.go:334] "Generic (PLEG): container finished" podID="051db122-80f6-47fc-8d5c-5244d92e593d" containerID="ffdb84acf31942996807c242b98114c9c8d67e2eeaa568117f878ad3675f41d8" exitCode=139 Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.910275 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"051db122-80f6-47fc-8d5c-5244d92e593d","Type":"ContainerDied","Data":"ffdb84acf31942996807c242b98114c9c8d67e2eeaa568117f878ad3675f41d8"} Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.924259 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-csmlt"] Jan 27 17:21:25 crc kubenswrapper[5049]: I0127 17:21:25.933068 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-csmlt"] Jan 27 17:21:26 crc kubenswrapper[5049]: E0127 17:21:26.139272 5049 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 27 17:21:26 crc kubenswrapper[5049]: E0127 17:21:26.139348 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data podName:dbb24b4b-dfbd-431f-8244-098c40f7c24f nodeName:}" failed. 
No retries permitted until 2026-01-27 17:21:34.13933015 +0000 UTC m=+1469.238303699 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data") pod "rabbitmq-cell1-server-0" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f") : configmap "rabbitmq-cell1-config-data" not found Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.412745 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.419197 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_051db122-80f6-47fc-8d5c-5244d92e593d/ovn-northd/0.log" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.419255 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.426202 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.505491 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555074 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/de39a65a-7265-4418-a94b-f8f8f30c3807-galera-tls-certs\") pod \"de39a65a-7265-4418-a94b-f8f8f30c3807\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555123 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-combined-ca-bundle\") pod \"051db122-80f6-47fc-8d5c-5244d92e593d\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555173 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-public-tls-certs\") pod \"294e84c0-d49f-4e45-87d5-085c7accf51e\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555204 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-kolla-config\") pod \"de39a65a-7265-4418-a94b-f8f8f30c3807\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555219 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"de39a65a-7265-4418-a94b-f8f8f30c3807\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555281 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/294e84c0-d49f-4e45-87d5-085c7accf51e-logs\") pod \"294e84c0-d49f-4e45-87d5-085c7accf51e\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555321 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clqnp\" (UniqueName: 
\"kubernetes.io/projected/294e84c0-d49f-4e45-87d5-085c7accf51e-kube-api-access-clqnp\") pod \"294e84c0-d49f-4e45-87d5-085c7accf51e\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555343 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-internal-tls-certs\") pod \"294e84c0-d49f-4e45-87d5-085c7accf51e\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555392 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-combined-ca-bundle\") pod \"294e84c0-d49f-4e45-87d5-085c7accf51e\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555412 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de39a65a-7265-4418-a94b-f8f8f30c3807-combined-ca-bundle\") pod \"de39a65a-7265-4418-a94b-f8f8f30c3807\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555435 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/051db122-80f6-47fc-8d5c-5244d92e593d-scripts\") pod \"051db122-80f6-47fc-8d5c-5244d92e593d\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555460 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6pgt\" (UniqueName: \"kubernetes.io/projected/051db122-80f6-47fc-8d5c-5244d92e593d-kube-api-access-g6pgt\") pod \"051db122-80f6-47fc-8d5c-5244d92e593d\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555527 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051db122-80f6-47fc-8d5c-5244d92e593d-config\") pod \"051db122-80f6-47fc-8d5c-5244d92e593d\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555572 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/051db122-80f6-47fc-8d5c-5244d92e593d-ovn-rundir\") pod \"051db122-80f6-47fc-8d5c-5244d92e593d\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555597 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-config-data-default\") pod \"de39a65a-7265-4418-a94b-f8f8f30c3807\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555619 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-config-data\") pod \"294e84c0-d49f-4e45-87d5-085c7accf51e\" (UID: \"294e84c0-d49f-4e45-87d5-085c7accf51e\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555648 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/de39a65a-7265-4418-a94b-f8f8f30c3807-config-data-generated\") pod \"de39a65a-7265-4418-a94b-f8f8f30c3807\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555692 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-operator-scripts\") pod \"de39a65a-7265-4418-a94b-f8f8f30c3807\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555715 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-ovn-northd-tls-certs\") pod \"051db122-80f6-47fc-8d5c-5244d92e593d\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555737 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-metrics-certs-tls-certs\") pod \"051db122-80f6-47fc-8d5c-5244d92e593d\" (UID: \"051db122-80f6-47fc-8d5c-5244d92e593d\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.555759 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24ls6\" (UniqueName: \"kubernetes.io/projected/de39a65a-7265-4418-a94b-f8f8f30c3807-kube-api-access-24ls6\") pod \"de39a65a-7265-4418-a94b-f8f8f30c3807\" (UID: \"de39a65a-7265-4418-a94b-f8f8f30c3807\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.556801 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/294e84c0-d49f-4e45-87d5-085c7accf51e-logs" (OuterVolumeSpecName: "logs") pod "294e84c0-d49f-4e45-87d5-085c7accf51e" (UID: "294e84c0-d49f-4e45-87d5-085c7accf51e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.556819 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "de39a65a-7265-4418-a94b-f8f8f30c3807" (UID: "de39a65a-7265-4418-a94b-f8f8f30c3807"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.556870 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "de39a65a-7265-4418-a94b-f8f8f30c3807" (UID: "de39a65a-7265-4418-a94b-f8f8f30c3807"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.556930 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de39a65a-7265-4418-a94b-f8f8f30c3807" (UID: "de39a65a-7265-4418-a94b-f8f8f30c3807"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.557243 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051db122-80f6-47fc-8d5c-5244d92e593d-scripts" (OuterVolumeSpecName: "scripts") pod "051db122-80f6-47fc-8d5c-5244d92e593d" (UID: "051db122-80f6-47fc-8d5c-5244d92e593d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.560753 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de39a65a-7265-4418-a94b-f8f8f30c3807-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "de39a65a-7265-4418-a94b-f8f8f30c3807" (UID: "de39a65a-7265-4418-a94b-f8f8f30c3807"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.561178 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051db122-80f6-47fc-8d5c-5244d92e593d-config" (OuterVolumeSpecName: "config") pod "051db122-80f6-47fc-8d5c-5244d92e593d" (UID: "051db122-80f6-47fc-8d5c-5244d92e593d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.561353 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/051db122-80f6-47fc-8d5c-5244d92e593d-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "051db122-80f6-47fc-8d5c-5244d92e593d" (UID: "051db122-80f6-47fc-8d5c-5244d92e593d"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.562438 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294e84c0-d49f-4e45-87d5-085c7accf51e-kube-api-access-clqnp" (OuterVolumeSpecName: "kube-api-access-clqnp") pod "294e84c0-d49f-4e45-87d5-085c7accf51e" (UID: "294e84c0-d49f-4e45-87d5-085c7accf51e"). InnerVolumeSpecName "kube-api-access-clqnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.563830 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051db122-80f6-47fc-8d5c-5244d92e593d-kube-api-access-g6pgt" (OuterVolumeSpecName: "kube-api-access-g6pgt") pod "051db122-80f6-47fc-8d5c-5244d92e593d" (UID: "051db122-80f6-47fc-8d5c-5244d92e593d"). InnerVolumeSpecName "kube-api-access-g6pgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.565591 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de39a65a-7265-4418-a94b-f8f8f30c3807-kube-api-access-24ls6" (OuterVolumeSpecName: "kube-api-access-24ls6") pod "de39a65a-7265-4418-a94b-f8f8f30c3807" (UID: "de39a65a-7265-4418-a94b-f8f8f30c3807"). InnerVolumeSpecName "kube-api-access-24ls6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.579964 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "de39a65a-7265-4418-a94b-f8f8f30c3807" (UID: "de39a65a-7265-4418-a94b-f8f8f30c3807"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.582999 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de39a65a-7265-4418-a94b-f8f8f30c3807-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de39a65a-7265-4418-a94b-f8f8f30c3807" (UID: "de39a65a-7265-4418-a94b-f8f8f30c3807"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.592258 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "294e84c0-d49f-4e45-87d5-085c7accf51e" (UID: "294e84c0-d49f-4e45-87d5-085c7accf51e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.604845 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-config-data" (OuterVolumeSpecName: "config-data") pod "294e84c0-d49f-4e45-87d5-085c7accf51e" (UID: "294e84c0-d49f-4e45-87d5-085c7accf51e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.634950 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de39a65a-7265-4418-a94b-f8f8f30c3807-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "de39a65a-7265-4418-a94b-f8f8f30c3807" (UID: "de39a65a-7265-4418-a94b-f8f8f30c3807"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.654964 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "294e84c0-d49f-4e45-87d5-085c7accf51e" (UID: "294e84c0-d49f-4e45-87d5-085c7accf51e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.655748 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "051db122-80f6-47fc-8d5c-5244d92e593d" (UID: "051db122-80f6-47fc-8d5c-5244d92e593d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.655766 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "294e84c0-d49f-4e45-87d5-085c7accf51e" (UID: "294e84c0-d49f-4e45-87d5-085c7accf51e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.657042 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-plugins\") pod \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.657128 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-server-conf\") pod \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.657179 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cm74\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-kube-api-access-4cm74\") pod \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.657270 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-pod-info\") pod \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.657329 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-plugins-conf\") pod \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.657361 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-erlang-cookie-secret\") pod \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.657427 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-erlang-cookie\") pod \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.657467 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-confd\") pod \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.657566 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data\") pod \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.657606 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-tls\") pod 
\"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.657656 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\" (UID: \"62ffcfe9-3e93-48ee-8d03-9b653d1bfede\") " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658069 5049 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/051db122-80f6-47fc-8d5c-5244d92e593d-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658094 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658112 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658125 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/de39a65a-7265-4418-a94b-f8f8f30c3807-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658137 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658150 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24ls6\" (UniqueName: \"kubernetes.io/projected/de39a65a-7265-4418-a94b-f8f8f30c3807-kube-api-access-24ls6\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658164 5049 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/de39a65a-7265-4418-a94b-f8f8f30c3807-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658176 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658187 5049 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658198 5049 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/de39a65a-7265-4418-a94b-f8f8f30c3807-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658224 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658237 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/294e84c0-d49f-4e45-87d5-085c7accf51e-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658249 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clqnp\" (UniqueName: \"kubernetes.io/projected/294e84c0-d49f-4e45-87d5-085c7accf51e-kube-api-access-clqnp\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658262 5049 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658274 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294e84c0-d49f-4e45-87d5-085c7accf51e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658286 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de39a65a-7265-4418-a94b-f8f8f30c3807-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658298 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/051db122-80f6-47fc-8d5c-5244d92e593d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658309 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6pgt\" (UniqueName: \"kubernetes.io/projected/051db122-80f6-47fc-8d5c-5244d92e593d-kube-api-access-g6pgt\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658322 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/051db122-80f6-47fc-8d5c-5244d92e593d-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658091 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "62ffcfe9-3e93-48ee-8d03-9b653d1bfede" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658549 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "62ffcfe9-3e93-48ee-8d03-9b653d1bfede" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.658791 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "62ffcfe9-3e93-48ee-8d03-9b653d1bfede" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.661688 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "62ffcfe9-3e93-48ee-8d03-9b653d1bfede" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.668782 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "62ffcfe9-3e93-48ee-8d03-9b653d1bfede" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.674007 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "051db122-80f6-47fc-8d5c-5244d92e593d" (UID: "051db122-80f6-47fc-8d5c-5244d92e593d"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.676294 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "62ffcfe9-3e93-48ee-8d03-9b653d1bfede" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.687019 5049 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.693860 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-pod-info" (OuterVolumeSpecName: "pod-info") pod "62ffcfe9-3e93-48ee-8d03-9b653d1bfede" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.693919 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-kube-api-access-4cm74" (OuterVolumeSpecName: "kube-api-access-4cm74") pod "62ffcfe9-3e93-48ee-8d03-9b653d1bfede" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede"). InnerVolumeSpecName "kube-api-access-4cm74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.694011 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "051db122-80f6-47fc-8d5c-5244d92e593d" (UID: "051db122-80f6-47fc-8d5c-5244d92e593d"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.694542 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data" (OuterVolumeSpecName: "config-data") pod "62ffcfe9-3e93-48ee-8d03-9b653d1bfede" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.716933 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-server-conf" (OuterVolumeSpecName: "server-conf") pod "62ffcfe9-3e93-48ee-8d03-9b653d1bfede" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.759808 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.759857 5049 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.759873 5049 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.759883 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cm74\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-kube-api-access-4cm74\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.759893 5049 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.759901 5049 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.759912 5049 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.759924 5049 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.759934 5049 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/051db122-80f6-47fc-8d5c-5244d92e593d-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.759944 5049 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.759955 5049 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.759965 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.759974 5049 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.779876 5049 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.791067 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "62ffcfe9-3e93-48ee-8d03-9b653d1bfede" (UID: "62ffcfe9-3e93-48ee-8d03-9b653d1bfede"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.861948 5049 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/62ffcfe9-3e93-48ee-8d03-9b653d1bfede-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.861984 5049 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.902138 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.932484 5049 generic.go:334] "Generic (PLEG): container finished" podID="dbb24b4b-dfbd-431f-8244-098c40f7c24f" containerID="3058eb3d32e2416d54cad80e06c08b015a6883dba23fa9f79957453d1cd58462" exitCode=0 Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.932573 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbb24b4b-dfbd-431f-8244-098c40f7c24f","Type":"ContainerDied","Data":"3058eb3d32e2416d54cad80e06c08b015a6883dba23fa9f79957453d1cd58462"} Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.932626 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dbb24b4b-dfbd-431f-8244-098c40f7c24f","Type":"ContainerDied","Data":"e82514d0c463243a362e8f448012e954befa69bf19834cae92acea6cf9239bc7"} Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.932644 5049 scope.go:117] "RemoveContainer" containerID="3058eb3d32e2416d54cad80e06c08b015a6883dba23fa9f79957453d1cd58462" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.932861 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.937077 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_051db122-80f6-47fc-8d5c-5244d92e593d/ovn-northd/0.log" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.937125 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"051db122-80f6-47fc-8d5c-5244d92e593d","Type":"ContainerDied","Data":"8eebc4551fa1812d5581049820945aa29afebb166de480408ce90a328196b2f1"} Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.937194 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.940086 5049 generic.go:334] "Generic (PLEG): container finished" podID="62ffcfe9-3e93-48ee-8d03-9b653d1bfede" containerID="4e671c8bd986d52bd1e5185e5289863dc7f99ba1cde15ecbdb767105fbf5621c" exitCode=0 Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.940138 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"62ffcfe9-3e93-48ee-8d03-9b653d1bfede","Type":"ContainerDied","Data":"4e671c8bd986d52bd1e5185e5289863dc7f99ba1cde15ecbdb767105fbf5621c"} Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.940157 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"62ffcfe9-3e93-48ee-8d03-9b653d1bfede","Type":"ContainerDied","Data":"e0bb3f2dbaf364487d744f22f95a1db0b0f24769e9cdbed2ab3cc9c64857b3f3"} Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.940423 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.942579 5049 generic.go:334] "Generic (PLEG): container finished" podID="28327cb6-87a9-4b24-b8fb-f43c33076b1b" containerID="1aad04186f3f290c52f9e3c6f44246f78807c70f72c854c4cfa401d9f8b67ba3" exitCode=0 Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.942625 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"28327cb6-87a9-4b24-b8fb-f43c33076b1b","Type":"ContainerDied","Data":"1aad04186f3f290c52f9e3c6f44246f78807c70f72c854c4cfa401d9f8b67ba3"} Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.949892 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"294e84c0-d49f-4e45-87d5-085c7accf51e","Type":"ContainerDied","Data":"5dbcff7b980f7f21cf78323dd18879804f6ca1ad81096ee5fdb4515233ff6492"} Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.949947 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.952012 5049 generic.go:334] "Generic (PLEG): container finished" podID="c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" containerID="064ca2d2f070edfd81bea729538cd5a51f364f47fe874ac76e3a571a97d5681c" exitCode=0 Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.952065 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-679b885964-9p8nj" event={"ID":"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e","Type":"ContainerDied","Data":"064ca2d2f070edfd81bea729538cd5a51f364f47fe874ac76e3a571a97d5681c"} Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.959432 5049 scope.go:117] "RemoveContainer" containerID="a29986ca75cb1699fa9d7fe36bd5312307aec498664cb9341a2e3a9d0ea59e2b" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.961339 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"de39a65a-7265-4418-a94b-f8f8f30c3807","Type":"ContainerDied","Data":"0dadf27e64cc64f8c046663c1ca2d44a658dcc8e4671b691f81c9bc864dcec3e"} Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.961505 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 17:21:26 crc kubenswrapper[5049]: I0127 17:21:26.986876 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.006844 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.047558 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.048846 5049 scope.go:117] "RemoveContainer" containerID="3058eb3d32e2416d54cad80e06c08b015a6883dba23fa9f79957453d1cd58462" Jan 27 17:21:27 crc kubenswrapper[5049]: E0127 17:21:27.051032 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3058eb3d32e2416d54cad80e06c08b015a6883dba23fa9f79957453d1cd58462\": container with ID starting with 3058eb3d32e2416d54cad80e06c08b015a6883dba23fa9f79957453d1cd58462 not found: ID does not exist" containerID="3058eb3d32e2416d54cad80e06c08b015a6883dba23fa9f79957453d1cd58462" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.051062 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3058eb3d32e2416d54cad80e06c08b015a6883dba23fa9f79957453d1cd58462"} err="failed to get container status \"3058eb3d32e2416d54cad80e06c08b015a6883dba23fa9f79957453d1cd58462\": rpc error: code = NotFound desc = could not find container \"3058eb3d32e2416d54cad80e06c08b015a6883dba23fa9f79957453d1cd58462\": container with ID starting with 3058eb3d32e2416d54cad80e06c08b015a6883dba23fa9f79957453d1cd58462 not found: ID does not exist" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.051081 5049 scope.go:117] "RemoveContainer" containerID="a29986ca75cb1699fa9d7fe36bd5312307aec498664cb9341a2e3a9d0ea59e2b" Jan 27 17:21:27 crc kubenswrapper[5049]: E0127 17:21:27.051283 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a29986ca75cb1699fa9d7fe36bd5312307aec498664cb9341a2e3a9d0ea59e2b\": container with ID starting with a29986ca75cb1699fa9d7fe36bd5312307aec498664cb9341a2e3a9d0ea59e2b not found: ID does not exist" 
containerID="a29986ca75cb1699fa9d7fe36bd5312307aec498664cb9341a2e3a9d0ea59e2b" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.051301 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a29986ca75cb1699fa9d7fe36bd5312307aec498664cb9341a2e3a9d0ea59e2b"} err="failed to get container status \"a29986ca75cb1699fa9d7fe36bd5312307aec498664cb9341a2e3a9d0ea59e2b\": rpc error: code = NotFound desc = could not find container \"a29986ca75cb1699fa9d7fe36bd5312307aec498664cb9341a2e3a9d0ea59e2b\": container with ID starting with a29986ca75cb1699fa9d7fe36bd5312307aec498664cb9341a2e3a9d0ea59e2b not found: ID does not exist" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.051313 5049 scope.go:117] "RemoveContainer" containerID="bc748ff2fbd71fb24f80f8b730d7367d5fd71e407cbaf62490be6b914c76b0a8" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.055775 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.063739 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.065850 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-erlang-cookie\") pod \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.065898 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8qrt\" (UniqueName: \"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-kube-api-access-n8qrt\") pod \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.065933 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-confd\") pod \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.065965 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbb24b4b-dfbd-431f-8244-098c40f7c24f-erlang-cookie-secret\") pod \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.066032 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data\") pod \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.066078 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-plugins\") pod \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.066099 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-tls\") pod \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.066117 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbb24b4b-dfbd-431f-8244-098c40f7c24f-pod-info\") pod \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.066134 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-server-conf\") pod \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.066150 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.066211 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-plugins-conf\") pod \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\" (UID: \"dbb24b4b-dfbd-431f-8244-098c40f7c24f\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.067010 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "dbb24b4b-dfbd-431f-8244-098c40f7c24f" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.067382 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "dbb24b4b-dfbd-431f-8244-098c40f7c24f" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.078488 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "dbb24b4b-dfbd-431f-8244-098c40f7c24f" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.079310 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.079342 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.079354 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.082037 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb24b4b-dfbd-431f-8244-098c40f7c24f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "dbb24b4b-dfbd-431f-8244-098c40f7c24f" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.088166 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-kube-api-access-n8qrt" (OuterVolumeSpecName: "kube-api-access-n8qrt") pod "dbb24b4b-dfbd-431f-8244-098c40f7c24f" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f"). InnerVolumeSpecName "kube-api-access-n8qrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.088358 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "dbb24b4b-dfbd-431f-8244-098c40f7c24f" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.092149 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "dbb24b4b-dfbd-431f-8244-098c40f7c24f" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.100599 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/dbb24b4b-dfbd-431f-8244-098c40f7c24f-pod-info" (OuterVolumeSpecName: "pod-info") pod "dbb24b4b-dfbd-431f-8244-098c40f7c24f" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.105865 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.109971 5049 scope.go:117] "RemoveContainer" containerID="ffdb84acf31942996807c242b98114c9c8d67e2eeaa568117f878ad3675f41d8" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.144345 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data" (OuterVolumeSpecName: "config-data") pod "dbb24b4b-dfbd-431f-8244-098c40f7c24f" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.179314 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28327cb6-87a9-4b24-b8fb-f43c33076b1b-kolla-config\") pod \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.179462 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28327cb6-87a9-4b24-b8fb-f43c33076b1b-combined-ca-bundle\") pod \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.179521 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95jk9\" (UniqueName: \"kubernetes.io/projected/28327cb6-87a9-4b24-b8fb-f43c33076b1b-kube-api-access-95jk9\") pod \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.179562 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28327cb6-87a9-4b24-b8fb-f43c33076b1b-config-data\") pod \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.179633 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28327cb6-87a9-4b24-b8fb-f43c33076b1b-memcached-tls-certs\") pod \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\" (UID: \"28327cb6-87a9-4b24-b8fb-f43c33076b1b\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.180017 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.180041 5049 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.180057 5049 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.180068 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8qrt\" (UniqueName: \"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-kube-api-access-n8qrt\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.180079 5049 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dbb24b4b-dfbd-431f-8244-098c40f7c24f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.180090 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.180101 5049 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.180112 5049 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.180123 5049 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dbb24b4b-dfbd-431f-8244-098c40f7c24f-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.180949 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28327cb6-87a9-4b24-b8fb-f43c33076b1b-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "28327cb6-87a9-4b24-b8fb-f43c33076b1b" (UID: "28327cb6-87a9-4b24-b8fb-f43c33076b1b"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.181470 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28327cb6-87a9-4b24-b8fb-f43c33076b1b-config-data" (OuterVolumeSpecName: "config-data") pod "28327cb6-87a9-4b24-b8fb-f43c33076b1b" (UID: "28327cb6-87a9-4b24-b8fb-f43c33076b1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.205519 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28327cb6-87a9-4b24-b8fb-f43c33076b1b-kube-api-access-95jk9" (OuterVolumeSpecName: "kube-api-access-95jk9") pod "28327cb6-87a9-4b24-b8fb-f43c33076b1b" (UID: "28327cb6-87a9-4b24-b8fb-f43c33076b1b"). InnerVolumeSpecName "kube-api-access-95jk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.229762 5049 scope.go:117] "RemoveContainer" containerID="4e671c8bd986d52bd1e5185e5289863dc7f99ba1cde15ecbdb767105fbf5621c" Jan 27 17:21:27 crc kubenswrapper[5049]: E0127 17:21:27.278878 5049 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 27 17:21:27 crc kubenswrapper[5049]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-27T17:21:20Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 27 17:21:27 crc kubenswrapper[5049]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Jan 27 17:21:27 crc kubenswrapper[5049]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-pv2qx" message=< Jan 27 17:21:27 crc kubenswrapper[5049]: Exiting ovn-controller (1) [FAILED] Jan 27 17:21:27 crc kubenswrapper[5049]: Killing ovn-controller (1) [ OK ] Jan 27 17:21:27 crc kubenswrapper[5049]: 2026-01-27T17:21:20Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 27 17:21:27 crc kubenswrapper[5049]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Jan 27 17:21:27 crc kubenswrapper[5049]: > Jan 27 17:21:27 crc kubenswrapper[5049]: E0127 17:21:27.278918 5049 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 27 17:21:27 crc kubenswrapper[5049]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-27T17:21:20Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 27 17:21:27 crc kubenswrapper[5049]: /etc/init.d/functions: line 589: 407 Alarm clock "$@" Jan 27 17:21:27 crc kubenswrapper[5049]: > pod="openstack/ovn-controller-pv2qx" podUID="389cf061-3e03-4e54-bf97-c88a747fd18b" containerName="ovn-controller" containerID="cri-o://b6b4e3c2a3c34184115ffa3dff467c9f9a271bdbcaaf1bf9c691d523be4cc482" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.278953 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-pv2qx" podUID="389cf061-3e03-4e54-bf97-c88a747fd18b" containerName="ovn-controller" containerID="cri-o://b6b4e3c2a3c34184115ffa3dff467c9f9a271bdbcaaf1bf9c691d523be4cc482" gracePeriod=22 Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.284552 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95jk9\" (UniqueName: \"kubernetes.io/projected/28327cb6-87a9-4b24-b8fb-f43c33076b1b-kube-api-access-95jk9\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.284584 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28327cb6-87a9-4b24-b8fb-f43c33076b1b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.284595 5049 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/28327cb6-87a9-4b24-b8fb-f43c33076b1b-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.319960 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28327cb6-87a9-4b24-b8fb-f43c33076b1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28327cb6-87a9-4b24-b8fb-f43c33076b1b" (UID: "28327cb6-87a9-4b24-b8fb-f43c33076b1b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.326307 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-server-conf" (OuterVolumeSpecName: "server-conf") pod "dbb24b4b-dfbd-431f-8244-098c40f7c24f" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.333702 5049 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.339505 5049 scope.go:117] "RemoveContainer" containerID="a2cc96849a7585da55c3fcd0c2a3f9b893b62b13c4ac2b87e3206b04bb909283" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.353214 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28327cb6-87a9-4b24-b8fb-f43c33076b1b-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "28327cb6-87a9-4b24-b8fb-f43c33076b1b" (UID: "28327cb6-87a9-4b24-b8fb-f43c33076b1b"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.377904 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "dbb24b4b-dfbd-431f-8244-098c40f7c24f" (UID: "dbb24b4b-dfbd-431f-8244-098c40f7c24f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.387578 5049 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/28327cb6-87a9-4b24-b8fb-f43c33076b1b-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.387609 5049 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dbb24b4b-dfbd-431f-8244-098c40f7c24f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.387619 5049 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dbb24b4b-dfbd-431f-8244-098c40f7c24f-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.387627 5049 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.387636 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28327cb6-87a9-4b24-b8fb-f43c33076b1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.672970 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="051db122-80f6-47fc-8d5c-5244d92e593d" path="/var/lib/kubelet/pods/051db122-80f6-47fc-8d5c-5244d92e593d/volumes" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.673886 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="294e84c0-d49f-4e45-87d5-085c7accf51e" 
path="/var/lib/kubelet/pods/294e84c0-d49f-4e45-87d5-085c7accf51e/volumes" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.674379 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41e50a09-5c4d-4898-bdd7-16d85fa7c90d" path="/var/lib/kubelet/pods/41e50a09-5c4d-4898-bdd7-16d85fa7c90d/volumes" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.675294 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ffcfe9-3e93-48ee-8d03-9b653d1bfede" path="/var/lib/kubelet/pods/62ffcfe9-3e93-48ee-8d03-9b653d1bfede/volumes" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.675918 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de39a65a-7265-4418-a94b-f8f8f30c3807" path="/var/lib/kubelet/pods/de39a65a-7265-4418-a94b-f8f8f30c3807/volumes" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.709543 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.746146 5049 scope.go:117] "RemoveContainer" containerID="4e671c8bd986d52bd1e5185e5289863dc7f99ba1cde15ecbdb767105fbf5621c" Jan 27 17:21:27 crc kubenswrapper[5049]: E0127 17:21:27.751516 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e671c8bd986d52bd1e5185e5289863dc7f99ba1cde15ecbdb767105fbf5621c\": container with ID starting with 4e671c8bd986d52bd1e5185e5289863dc7f99ba1cde15ecbdb767105fbf5621c not found: ID does not exist" containerID="4e671c8bd986d52bd1e5185e5289863dc7f99ba1cde15ecbdb767105fbf5621c" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.751592 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e671c8bd986d52bd1e5185e5289863dc7f99ba1cde15ecbdb767105fbf5621c"} err="failed to get container status \"4e671c8bd986d52bd1e5185e5289863dc7f99ba1cde15ecbdb767105fbf5621c\": rpc error: code = NotFound desc = could not find container \"4e671c8bd986d52bd1e5185e5289863dc7f99ba1cde15ecbdb767105fbf5621c\": container with ID starting with 4e671c8bd986d52bd1e5185e5289863dc7f99ba1cde15ecbdb767105fbf5621c not found: ID does not exist" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.751643 5049 scope.go:117] "RemoveContainer" containerID="a2cc96849a7585da55c3fcd0c2a3f9b893b62b13c4ac2b87e3206b04bb909283" Jan 27 17:21:27 crc kubenswrapper[5049]: E0127 17:21:27.754078 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2cc96849a7585da55c3fcd0c2a3f9b893b62b13c4ac2b87e3206b04bb909283\": container with ID starting with a2cc96849a7585da55c3fcd0c2a3f9b893b62b13c4ac2b87e3206b04bb909283 not found: ID does not exist" containerID="a2cc96849a7585da55c3fcd0c2a3f9b893b62b13c4ac2b87e3206b04bb909283" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.754120 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2cc96849a7585da55c3fcd0c2a3f9b893b62b13c4ac2b87e3206b04bb909283"} err="failed to get container status \"a2cc96849a7585da55c3fcd0c2a3f9b893b62b13c4ac2b87e3206b04bb909283\": rpc error: code = NotFound desc = could not find container \"a2cc96849a7585da55c3fcd0c2a3f9b893b62b13c4ac2b87e3206b04bb909283\": container with ID starting with a2cc96849a7585da55c3fcd0c2a3f9b893b62b13c4ac2b87e3206b04bb909283 not found: ID does not exist" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.754145 5049 
scope.go:117] "RemoveContainer" containerID="b7a20b9de92877a9ab934476ffa27a2a939c104c0bb643b1807fc727b4746d30" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.796123 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-combined-ca-bundle\") pod \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.796181 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znz92\" (UniqueName: \"kubernetes.io/projected/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-kube-api-access-znz92\") pod \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.796208 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-config-data\") pod \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.796250 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-public-tls-certs\") pod \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.796295 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-scripts\") pod \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.796446 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-fernet-keys\") pod \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.796482 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-credential-keys\") pod \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.796516 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-internal-tls-certs\") pod \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\" (UID: \"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.801298 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-kube-api-access-znz92" (OuterVolumeSpecName: "kube-api-access-znz92") pod "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" (UID: "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e"). InnerVolumeSpecName "kube-api-access-znz92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.804168 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" (UID: "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.810725 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" (UID: "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.811616 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.812826 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-scripts" (OuterVolumeSpecName: "scripts") pod "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" (UID: "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.813522 5049 scope.go:117] "RemoveContainer" containerID="cbe71e694f563bfe04548dc1dfb37796b16b1852241671e8d1a4cc3caf1b96a2" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.846317 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-config-data" (OuterVolumeSpecName: "config-data") pod "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" (UID: "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.850311 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" (UID: "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.850807 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-pv2qx_389cf061-3e03-4e54-bf97-c88a747fd18b/ovn-controller/0.log" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.850912 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pv2qx" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.871870 5049 scope.go:117] "RemoveContainer" containerID="4c24478588bff8e90f1e2d67898dd6f439331bc9e8f594f828123fbc7e460d13" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.879692 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" (UID: "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.891493 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" (UID: "c3e00689-0036-4c1b-84ee-d4f97cfe2d3e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.900563 5049 scope.go:117] "RemoveContainer" containerID="f7c4e00da4c82ddc9caf7e857d44fe671979a2b534b5f360c2075b65d6c610d1" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.905857 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-sg-core-conf-yaml\") pod \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.905912 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-log-httpd\") pod \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.905951 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkptz\" (UniqueName: \"kubernetes.io/projected/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-kube-api-access-pkptz\") pod \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.905970 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-scripts\") pod \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.906006 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-ceilometer-tls-certs\") pod \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.906048 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-run-httpd\") pod \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.906115 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-combined-ca-bundle\") pod \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.906167 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-config-data\") pod \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\" (UID: \"c6b92aaa-ae4b-41ba-bd72-5e6d01518000\") " Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 
17:21:27.906427 5049 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.906439 5049 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.906449 5049 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.906457 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.906466 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znz92\" (UniqueName: \"kubernetes.io/projected/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-kube-api-access-znz92\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.906476 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.906485 5049 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.906493 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.907336 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c6b92aaa-ae4b-41ba-bd72-5e6d01518000" (UID: "c6b92aaa-ae4b-41ba-bd72-5e6d01518000"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.908715 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-scripts" (OuterVolumeSpecName: "scripts") pod "c6b92aaa-ae4b-41ba-bd72-5e6d01518000" (UID: "c6b92aaa-ae4b-41ba-bd72-5e6d01518000"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.909115 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c6b92aaa-ae4b-41ba-bd72-5e6d01518000" (UID: "c6b92aaa-ae4b-41ba-bd72-5e6d01518000"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.912296 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-kube-api-access-pkptz" (OuterVolumeSpecName: "kube-api-access-pkptz") pod "c6b92aaa-ae4b-41ba-bd72-5e6d01518000" (UID: "c6b92aaa-ae4b-41ba-bd72-5e6d01518000"). InnerVolumeSpecName "kube-api-access-pkptz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.919361 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:21:27 crc kubenswrapper[5049]: E0127 17:21:27.963103 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 17:21:27 crc kubenswrapper[5049]: E0127 17:21:27.966566 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.966596 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c6b92aaa-ae4b-41ba-bd72-5e6d01518000" (UID: "c6b92aaa-ae4b-41ba-bd72-5e6d01518000"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:27 crc kubenswrapper[5049]: E0127 17:21:27.969580 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 17:21:27 crc kubenswrapper[5049]: E0127 17:21:27.969640 5049 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="6c4b4464-1c98-412b-96cf-235908a4eaf6" containerName="nova-cell0-conductor-conductor" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.982482 5049 generic.go:334] "Generic (PLEG): container finished" podID="adfa2378-a75a-41b5-9ea9-71c8da89f750" containerID="50cdd79f3cf12114854b53391e39352052915fc423551b6aa65b2a4bd254ce59" exitCode=0 Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.982584 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" event={"ID":"adfa2378-a75a-41b5-9ea9-71c8da89f750","Type":"ContainerDied","Data":"50cdd79f3cf12114854b53391e39352052915fc423551b6aa65b2a4bd254ce59"} Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.982621 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" event={"ID":"adfa2378-a75a-41b5-9ea9-71c8da89f750","Type":"ContainerDied","Data":"4b96bf1d48df908af869429f502c5e9a251dcc55a8adde567eee0b8a31a9912b"} Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.982655 5049 scope.go:117] "RemoveContainer" containerID="50cdd79f3cf12114854b53391e39352052915fc423551b6aa65b2a4bd254ce59" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.983042 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fb468df94-7s5tf" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.985234 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-679b885964-9p8nj" event={"ID":"c3e00689-0036-4c1b-84ee-d4f97cfe2d3e","Type":"ContainerDied","Data":"1daa8674aa8e85c04ee4652965be4a4ef4c6b9585153c6224aa29de71ec45ff9"} Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.985494 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-679b885964-9p8nj" Jan 27 17:21:27 crc kubenswrapper[5049]: I0127 17:21:27.995444 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c6b92aaa-ae4b-41ba-bd72-5e6d01518000" (UID: "c6b92aaa-ae4b-41ba-bd72-5e6d01518000"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.001819 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-pv2qx_389cf061-3e03-4e54-bf97-c88a747fd18b/ovn-controller/0.log" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.001951 5049 generic.go:334] "Generic (PLEG): container finished" podID="389cf061-3e03-4e54-bf97-c88a747fd18b" containerID="b6b4e3c2a3c34184115ffa3dff467c9f9a271bdbcaaf1bf9c691d523be4cc482" exitCode=139 Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.002086 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pv2qx" event={"ID":"389cf061-3e03-4e54-bf97-c88a747fd18b","Type":"ContainerDied","Data":"b6b4e3c2a3c34184115ffa3dff467c9f9a271bdbcaaf1bf9c691d523be4cc482"} Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.002115 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pv2qx" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.002134 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pv2qx" event={"ID":"389cf061-3e03-4e54-bf97-c88a747fd18b","Type":"ContainerDied","Data":"bc9c1b18296b33c6aedf49a84f5e80627a49a399d8d320932db333159b09c46b"} Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.019033 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6b92aaa-ae4b-41ba-bd72-5e6d01518000" (UID: "c6b92aaa-ae4b-41ba-bd72-5e6d01518000"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.019984 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/389cf061-3e03-4e54-bf97-c88a747fd18b-combined-ca-bundle\") pod \"389cf061-3e03-4e54-bf97-c88a747fd18b\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.020022 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-run-ovn\") pod \"389cf061-3e03-4e54-bf97-c88a747fd18b\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.020160 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxm67\" (UniqueName: \"kubernetes.io/projected/389cf061-3e03-4e54-bf97-c88a747fd18b-kube-api-access-vxm67\") pod \"389cf061-3e03-4e54-bf97-c88a747fd18b\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.020221 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adfa2378-a75a-41b5-9ea9-71c8da89f750-logs\") pod \"adfa2378-a75a-41b5-9ea9-71c8da89f750\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.020277 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-combined-ca-bundle\") pod \"adfa2378-a75a-41b5-9ea9-71c8da89f750\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " Jan 27 17:21:28 crc 
kubenswrapper[5049]: I0127 17:21:28.020315 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/389cf061-3e03-4e54-bf97-c88a747fd18b-scripts\") pod \"389cf061-3e03-4e54-bf97-c88a747fd18b\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.020335 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmlk5\" (UniqueName: \"kubernetes.io/projected/adfa2378-a75a-41b5-9ea9-71c8da89f750-kube-api-access-wmlk5\") pod \"adfa2378-a75a-41b5-9ea9-71c8da89f750\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.020356 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-config-data\") pod \"adfa2378-a75a-41b5-9ea9-71c8da89f750\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.020373 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-log-ovn\") pod \"389cf061-3e03-4e54-bf97-c88a747fd18b\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.020393 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/389cf061-3e03-4e54-bf97-c88a747fd18b-ovn-controller-tls-certs\") pod \"389cf061-3e03-4e54-bf97-c88a747fd18b\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.020458 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-config-data-custom\") pod \"adfa2378-a75a-41b5-9ea9-71c8da89f750\" (UID: \"adfa2378-a75a-41b5-9ea9-71c8da89f750\") " Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.020559 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-run\") pod \"389cf061-3e03-4e54-bf97-c88a747fd18b\" (UID: \"389cf061-3e03-4e54-bf97-c88a747fd18b\") " Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.021103 5049 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.021165 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkptz\" (UniqueName: \"kubernetes.io/projected/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-kube-api-access-pkptz\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.021190 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.021199 5049 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc 
kubenswrapper[5049]: I0127 17:21:28.021211 5049 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.021220 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.021228 5049 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.021280 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-run" (OuterVolumeSpecName: "var-run") pod "389cf061-3e03-4e54-bf97-c88a747fd18b" (UID: "389cf061-3e03-4e54-bf97-c88a747fd18b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.021615 5049 scope.go:117] "RemoveContainer" containerID="2f71706bcdb06fe2e17742b9ee8311b8e73605a2109fa8b21e0f7b3b6738662b" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.021662 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "389cf061-3e03-4e54-bf97-c88a747fd18b" (UID: "389cf061-3e03-4e54-bf97-c88a747fd18b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.022836 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "389cf061-3e03-4e54-bf97-c88a747fd18b" (UID: "389cf061-3e03-4e54-bf97-c88a747fd18b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.023245 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/389cf061-3e03-4e54-bf97-c88a747fd18b-scripts" (OuterVolumeSpecName: "scripts") pod "389cf061-3e03-4e54-bf97-c88a747fd18b" (UID: "389cf061-3e03-4e54-bf97-c88a747fd18b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.023779 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adfa2378-a75a-41b5-9ea9-71c8da89f750-logs" (OuterVolumeSpecName: "logs") pod "adfa2378-a75a-41b5-9ea9-71c8da89f750" (UID: "adfa2378-a75a-41b5-9ea9-71c8da89f750"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.031852 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "adfa2378-a75a-41b5-9ea9-71c8da89f750" (UID: "adfa2378-a75a-41b5-9ea9-71c8da89f750"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.031928 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/389cf061-3e03-4e54-bf97-c88a747fd18b-kube-api-access-vxm67" (OuterVolumeSpecName: "kube-api-access-vxm67") pod "389cf061-3e03-4e54-bf97-c88a747fd18b" (UID: "389cf061-3e03-4e54-bf97-c88a747fd18b"). InnerVolumeSpecName "kube-api-access-vxm67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.034430 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-679b885964-9p8nj"] Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.043508 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-679b885964-9p8nj"] Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.046782 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adfa2378-a75a-41b5-9ea9-71c8da89f750-kube-api-access-wmlk5" (OuterVolumeSpecName: "kube-api-access-wmlk5") pod "adfa2378-a75a-41b5-9ea9-71c8da89f750" (UID: "adfa2378-a75a-41b5-9ea9-71c8da89f750"). InnerVolumeSpecName "kube-api-access-wmlk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.047195 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.047871 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"28327cb6-87a9-4b24-b8fb-f43c33076b1b","Type":"ContainerDied","Data":"39f95b3a35ca5a16bb2f41054b4b2fd9b049f49c3702b34e41b0d26dd9cb8170"} Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.051964 5049 generic.go:334] "Generic (PLEG): container finished" podID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerID="89a7551a874ceab6f02852e0381c82d453fda16064fce545e571eeeac5b7ce71" exitCode=0 Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.052056 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.052151 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6b92aaa-ae4b-41ba-bd72-5e6d01518000","Type":"ContainerDied","Data":"89a7551a874ceab6f02852e0381c82d453fda16064fce545e571eeeac5b7ce71"} Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.052218 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c6b92aaa-ae4b-41ba-bd72-5e6d01518000","Type":"ContainerDied","Data":"910f229bd79131c626c319e3474800d8ecce82d34d630f88a2fe24a6791b0b08"} Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.053378 5049 scope.go:117] "RemoveContainer" containerID="50cdd79f3cf12114854b53391e39352052915fc423551b6aa65b2a4bd254ce59" Jan 27 17:21:28 crc kubenswrapper[5049]: E0127 17:21:28.055401 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50cdd79f3cf12114854b53391e39352052915fc423551b6aa65b2a4bd254ce59\": container with ID starting with 50cdd79f3cf12114854b53391e39352052915fc423551b6aa65b2a4bd254ce59 not found: ID does not exist" containerID="50cdd79f3cf12114854b53391e39352052915fc423551b6aa65b2a4bd254ce59" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.055522 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50cdd79f3cf12114854b53391e39352052915fc423551b6aa65b2a4bd254ce59"} err="failed to get container status \"50cdd79f3cf12114854b53391e39352052915fc423551b6aa65b2a4bd254ce59\": rpc error: code = NotFound desc = could not find container \"50cdd79f3cf12114854b53391e39352052915fc423551b6aa65b2a4bd254ce59\": container with ID starting with 50cdd79f3cf12114854b53391e39352052915fc423551b6aa65b2a4bd254ce59 not found: ID does not exist" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.055603 5049 scope.go:117] "RemoveContainer" containerID="2f71706bcdb06fe2e17742b9ee8311b8e73605a2109fa8b21e0f7b3b6738662b" Jan 27 17:21:28 crc kubenswrapper[5049]: E0127 17:21:28.058546 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f71706bcdb06fe2e17742b9ee8311b8e73605a2109fa8b21e0f7b3b6738662b\": container with ID starting with 2f71706bcdb06fe2e17742b9ee8311b8e73605a2109fa8b21e0f7b3b6738662b not found: ID does not exist" containerID="2f71706bcdb06fe2e17742b9ee8311b8e73605a2109fa8b21e0f7b3b6738662b" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.058634 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f71706bcdb06fe2e17742b9ee8311b8e73605a2109fa8b21e0f7b3b6738662b"} err="failed to get container status \"2f71706bcdb06fe2e17742b9ee8311b8e73605a2109fa8b21e0f7b3b6738662b\": rpc error: code = NotFound desc = could not find container \"2f71706bcdb06fe2e17742b9ee8311b8e73605a2109fa8b21e0f7b3b6738662b\": container with ID starting with 2f71706bcdb06fe2e17742b9ee8311b8e73605a2109fa8b21e0f7b3b6738662b not found: ID does not exist" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.058738 5049 scope.go:117] "RemoveContainer" containerID="064ca2d2f070edfd81bea729538cd5a51f364f47fe874ac76e3a571a97d5681c" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.075119 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "adfa2378-a75a-41b5-9ea9-71c8da89f750" (UID: "adfa2378-a75a-41b5-9ea9-71c8da89f750"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.109921 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-config-data" (OuterVolumeSpecName: "config-data") pod "c6b92aaa-ae4b-41ba-bd72-5e6d01518000" (UID: "c6b92aaa-ae4b-41ba-bd72-5e6d01518000"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.112446 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-config-data" (OuterVolumeSpecName: "config-data") pod "adfa2378-a75a-41b5-9ea9-71c8da89f750" (UID: "adfa2378-a75a-41b5-9ea9-71c8da89f750"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.115740 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/389cf061-3e03-4e54-bf97-c88a747fd18b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "389cf061-3e03-4e54-bf97-c88a747fd18b" (UID: "389cf061-3e03-4e54-bf97-c88a747fd18b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.116988 5049 scope.go:117] "RemoveContainer" containerID="b6b4e3c2a3c34184115ffa3dff467c9f9a271bdbcaaf1bf9c691d523be4cc482" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.125360 5049 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.125381 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/389cf061-3e03-4e54-bf97-c88a747fd18b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.125397 5049 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.125406 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxm67\" (UniqueName: \"kubernetes.io/projected/389cf061-3e03-4e54-bf97-c88a747fd18b-kube-api-access-vxm67\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.125415 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adfa2378-a75a-41b5-9ea9-71c8da89f750-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.125423 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.125431 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/389cf061-3e03-4e54-bf97-c88a747fd18b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc 
kubenswrapper[5049]: I0127 17:21:28.125440 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmlk5\" (UniqueName: \"kubernetes.io/projected/adfa2378-a75a-41b5-9ea9-71c8da89f750-kube-api-access-wmlk5\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.125447 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.125455 5049 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/389cf061-3e03-4e54-bf97-c88a747fd18b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.125463 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adfa2378-a75a-41b5-9ea9-71c8da89f750-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.125471 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b92aaa-ae4b-41ba-bd72-5e6d01518000-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.132281 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.149610 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.155917 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/389cf061-3e03-4e54-bf97-c88a747fd18b-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "389cf061-3e03-4e54-bf97-c88a747fd18b" (UID: "389cf061-3e03-4e54-bf97-c88a747fd18b"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.157895 5049 scope.go:117] "RemoveContainer" containerID="b6b4e3c2a3c34184115ffa3dff467c9f9a271bdbcaaf1bf9c691d523be4cc482" Jan 27 17:21:28 crc kubenswrapper[5049]: E0127 17:21:28.158493 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6b4e3c2a3c34184115ffa3dff467c9f9a271bdbcaaf1bf9c691d523be4cc482\": container with ID starting with b6b4e3c2a3c34184115ffa3dff467c9f9a271bdbcaaf1bf9c691d523be4cc482 not found: ID does not exist" containerID="b6b4e3c2a3c34184115ffa3dff467c9f9a271bdbcaaf1bf9c691d523be4cc482" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.158526 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6b4e3c2a3c34184115ffa3dff467c9f9a271bdbcaaf1bf9c691d523be4cc482"} err="failed to get container status \"b6b4e3c2a3c34184115ffa3dff467c9f9a271bdbcaaf1bf9c691d523be4cc482\": rpc error: code = NotFound desc = could not find container \"b6b4e3c2a3c34184115ffa3dff467c9f9a271bdbcaaf1bf9c691d523be4cc482\": container with ID starting with b6b4e3c2a3c34184115ffa3dff467c9f9a271bdbcaaf1bf9c691d523be4cc482 not found: ID does not exist" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.158570 5049 scope.go:117] "RemoveContainer" containerID="1aad04186f3f290c52f9e3c6f44246f78807c70f72c854c4cfa401d9f8b67ba3" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.192717 5049 scope.go:117] "RemoveContainer" containerID="699c3c69ddd586c2ba6efa14873e1ef576bd164da6c7b87cfc3147d472bbdbd9" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.208958 5049 scope.go:117] "RemoveContainer" containerID="4a1f580388f9867d0121ec63028c1c96323cf21dcefe575dbced96cf285661e4" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.230379 5049 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/389cf061-3e03-4e54-bf97-c88a747fd18b-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.251848 5049 scope.go:117] "RemoveContainer" containerID="89a7551a874ceab6f02852e0381c82d453fda16064fce545e571eeeac5b7ce71" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.293597 5049 scope.go:117] "RemoveContainer" containerID="0a77e3ad3f76bbeeb3b5ada4b0ddb6ff8c950ed1581373853362db15eb5b6c69" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.326931 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-fb468df94-7s5tf"] Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.352786 5049 scope.go:117] "RemoveContainer" containerID="699c3c69ddd586c2ba6efa14873e1ef576bd164da6c7b87cfc3147d472bbdbd9" Jan 27 17:21:28 crc kubenswrapper[5049]: E0127 17:21:28.353326 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"699c3c69ddd586c2ba6efa14873e1ef576bd164da6c7b87cfc3147d472bbdbd9\": container with ID starting with 699c3c69ddd586c2ba6efa14873e1ef576bd164da6c7b87cfc3147d472bbdbd9 not found: ID does not exist" containerID="699c3c69ddd586c2ba6efa14873e1ef576bd164da6c7b87cfc3147d472bbdbd9" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.353372 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"699c3c69ddd586c2ba6efa14873e1ef576bd164da6c7b87cfc3147d472bbdbd9"} err="failed to get container 
status \"699c3c69ddd586c2ba6efa14873e1ef576bd164da6c7b87cfc3147d472bbdbd9\": rpc error: code = NotFound desc = could not find container \"699c3c69ddd586c2ba6efa14873e1ef576bd164da6c7b87cfc3147d472bbdbd9\": container with ID starting with 699c3c69ddd586c2ba6efa14873e1ef576bd164da6c7b87cfc3147d472bbdbd9 not found: ID does not exist" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.353403 5049 scope.go:117] "RemoveContainer" containerID="4a1f580388f9867d0121ec63028c1c96323cf21dcefe575dbced96cf285661e4" Jan 27 17:21:28 crc kubenswrapper[5049]: E0127 17:21:28.360803 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a1f580388f9867d0121ec63028c1c96323cf21dcefe575dbced96cf285661e4\": container with ID starting with 4a1f580388f9867d0121ec63028c1c96323cf21dcefe575dbced96cf285661e4 not found: ID does not exist" containerID="4a1f580388f9867d0121ec63028c1c96323cf21dcefe575dbced96cf285661e4" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.360844 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a1f580388f9867d0121ec63028c1c96323cf21dcefe575dbced96cf285661e4"} err="failed to get container status \"4a1f580388f9867d0121ec63028c1c96323cf21dcefe575dbced96cf285661e4\": rpc error: code = NotFound desc = could not find container \"4a1f580388f9867d0121ec63028c1c96323cf21dcefe575dbced96cf285661e4\": container with ID starting with 4a1f580388f9867d0121ec63028c1c96323cf21dcefe575dbced96cf285661e4 not found: ID does not exist" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.360877 5049 scope.go:117] "RemoveContainer" containerID="89a7551a874ceab6f02852e0381c82d453fda16064fce545e571eeeac5b7ce71" Jan 27 17:21:28 crc kubenswrapper[5049]: E0127 17:21:28.361321 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a7551a874ceab6f02852e0381c82d453fda16064fce545e571eeeac5b7ce71\": container with ID starting with 89a7551a874ceab6f02852e0381c82d453fda16064fce545e571eeeac5b7ce71 not found: ID does not exist" containerID="89a7551a874ceab6f02852e0381c82d453fda16064fce545e571eeeac5b7ce71" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.361366 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a7551a874ceab6f02852e0381c82d453fda16064fce545e571eeeac5b7ce71"} err="failed to get container status \"89a7551a874ceab6f02852e0381c82d453fda16064fce545e571eeeac5b7ce71\": rpc error: code = NotFound desc = could not find container \"89a7551a874ceab6f02852e0381c82d453fda16064fce545e571eeeac5b7ce71\": container with ID starting with 89a7551a874ceab6f02852e0381c82d453fda16064fce545e571eeeac5b7ce71 not found: ID does not exist" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.361396 5049 scope.go:117] "RemoveContainer" containerID="0a77e3ad3f76bbeeb3b5ada4b0ddb6ff8c950ed1581373853362db15eb5b6c69" Jan 27 17:21:28 crc kubenswrapper[5049]: E0127 17:21:28.361763 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a77e3ad3f76bbeeb3b5ada4b0ddb6ff8c950ed1581373853362db15eb5b6c69\": container with ID starting with 0a77e3ad3f76bbeeb3b5ada4b0ddb6ff8c950ed1581373853362db15eb5b6c69 not found: ID does not exist" containerID="0a77e3ad3f76bbeeb3b5ada4b0ddb6ff8c950ed1581373853362db15eb5b6c69" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.361804 5049 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a77e3ad3f76bbeeb3b5ada4b0ddb6ff8c950ed1581373853362db15eb5b6c69"} err="failed to get container status \"0a77e3ad3f76bbeeb3b5ada4b0ddb6ff8c950ed1581373853362db15eb5b6c69\": rpc error: code = NotFound desc = could not find container \"0a77e3ad3f76bbeeb3b5ada4b0ddb6ff8c950ed1581373853362db15eb5b6c69\": container with ID starting with 0a77e3ad3f76bbeeb3b5ada4b0ddb6ff8c950ed1581373853362db15eb5b6c69 not found: ID does not exist" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.362136 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-fb468df94-7s5tf"] Jan 27 17:21:28 crc kubenswrapper[5049]: E0127 17:21:28.407703 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38 is running failed: container process not found" containerID="abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 17:21:28 crc kubenswrapper[5049]: E0127 17:21:28.408105 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38 is running failed: container process not found" containerID="abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 17:21:28 crc kubenswrapper[5049]: E0127 17:21:28.408484 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38 is running failed: container process not found" containerID="abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 17:21:28 crc kubenswrapper[5049]: E0127 17:21:28.408551 5049 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="22dc9694-6c5e-4ac3-99e3-910dac92573a" containerName="nova-cell1-conductor-conductor" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.413852 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pv2qx"] Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.427476 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pv2qx"] Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.432468 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.436978 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.491482 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.633824 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22dc9694-6c5e-4ac3-99e3-910dac92573a-combined-ca-bundle\") pod \"22dc9694-6c5e-4ac3-99e3-910dac92573a\" (UID: \"22dc9694-6c5e-4ac3-99e3-910dac92573a\") " Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.633892 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22dc9694-6c5e-4ac3-99e3-910dac92573a-config-data\") pod \"22dc9694-6c5e-4ac3-99e3-910dac92573a\" (UID: \"22dc9694-6c5e-4ac3-99e3-910dac92573a\") " Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.634052 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mwnq\" (UniqueName: \"kubernetes.io/projected/22dc9694-6c5e-4ac3-99e3-910dac92573a-kube-api-access-7mwnq\") pod \"22dc9694-6c5e-4ac3-99e3-910dac92573a\" (UID: \"22dc9694-6c5e-4ac3-99e3-910dac92573a\") " Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.659130 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22dc9694-6c5e-4ac3-99e3-910dac92573a-kube-api-access-7mwnq" (OuterVolumeSpecName: "kube-api-access-7mwnq") pod "22dc9694-6c5e-4ac3-99e3-910dac92573a" (UID: "22dc9694-6c5e-4ac3-99e3-910dac92573a"). InnerVolumeSpecName "kube-api-access-7mwnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.660585 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22dc9694-6c5e-4ac3-99e3-910dac92573a-config-data" (OuterVolumeSpecName: "config-data") pod "22dc9694-6c5e-4ac3-99e3-910dac92573a" (UID: "22dc9694-6c5e-4ac3-99e3-910dac92573a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.673208 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22dc9694-6c5e-4ac3-99e3-910dac92573a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22dc9694-6c5e-4ac3-99e3-910dac92573a" (UID: "22dc9694-6c5e-4ac3-99e3-910dac92573a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.735705 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mwnq\" (UniqueName: \"kubernetes.io/projected/22dc9694-6c5e-4ac3-99e3-910dac92573a-kube-api-access-7mwnq\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.735742 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22dc9694-6c5e-4ac3-99e3-910dac92573a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:28 crc kubenswrapper[5049]: I0127 17:21:28.735755 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22dc9694-6c5e-4ac3-99e3-910dac92573a-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 17:21:29.074978 5049 generic.go:334] "Generic (PLEG): container finished" podID="22dc9694-6c5e-4ac3-99e3-910dac92573a" containerID="abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38" exitCode=0 Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 17:21:29.075035 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 17:21:29.075044 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"22dc9694-6c5e-4ac3-99e3-910dac92573a","Type":"ContainerDied","Data":"abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38"} Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 17:21:29.075066 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"22dc9694-6c5e-4ac3-99e3-910dac92573a","Type":"ContainerDied","Data":"5c22ad211a63886705bf0766f5de3774ec9c809eb6079254f8059f8f06f05170"} Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 17:21:29.075082 5049 scope.go:117] "RemoveContainer" containerID="abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38" Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 17:21:29.095633 5049 scope.go:117] "RemoveContainer" containerID="abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38" Jan 27 17:21:29 crc kubenswrapper[5049]: E0127 17:21:29.096111 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38\": container with ID starting with abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38 not found: ID does not exist" containerID="abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38" Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 17:21:29.096157 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38"} err="failed to get container status \"abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38\": rpc error: code = NotFound desc = could not find container \"abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38\": container with ID starting with abd6fd06623cddef6f405090a0977ee13c93b091c34461553e23d2897929fd38 not found: ID does not exist" Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 17:21:29.112808 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 
17:21:29.118051 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 17:21:29.657772 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22dc9694-6c5e-4ac3-99e3-910dac92573a" path="/var/lib/kubelet/pods/22dc9694-6c5e-4ac3-99e3-910dac92573a/volumes" Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 17:21:29.658469 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28327cb6-87a9-4b24-b8fb-f43c33076b1b" path="/var/lib/kubelet/pods/28327cb6-87a9-4b24-b8fb-f43c33076b1b/volumes" Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 17:21:29.659110 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="389cf061-3e03-4e54-bf97-c88a747fd18b" path="/var/lib/kubelet/pods/389cf061-3e03-4e54-bf97-c88a747fd18b/volumes" Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 17:21:29.660336 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adfa2378-a75a-41b5-9ea9-71c8da89f750" path="/var/lib/kubelet/pods/adfa2378-a75a-41b5-9ea9-71c8da89f750/volumes" Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 17:21:29.661014 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" path="/var/lib/kubelet/pods/c3e00689-0036-4c1b-84ee-d4f97cfe2d3e/volumes" Jan 27 17:21:29 crc kubenswrapper[5049]: I0127 17:21:29.661625 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" path="/var/lib/kubelet/pods/c6b92aaa-ae4b-41ba-bd72-5e6d01518000/volumes" Jan 27 17:21:30 crc kubenswrapper[5049]: E0127 17:21:30.135644 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:30 crc kubenswrapper[5049]: E0127 17:21:30.136936 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:30 crc kubenswrapper[5049]: E0127 17:21:30.137255 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:30 crc kubenswrapper[5049]: E0127 17:21:30.137297 5049 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovsdb-server" Jan 27 17:21:30 crc kubenswrapper[5049]: E0127 17:21:30.137849 5049 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:30 crc kubenswrapper[5049]: E0127 17:21:30.140042 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:30 crc kubenswrapper[5049]: E0127 17:21:30.142456 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:30 crc kubenswrapper[5049]: E0127 17:21:30.142551 5049 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovs-vswitchd" Jan 27 17:21:30 crc kubenswrapper[5049]: I0127 17:21:30.776821 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 17:21:30 crc kubenswrapper[5049]: I0127 17:21:30.869412 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7ndl\" (UniqueName: \"kubernetes.io/projected/6c4b4464-1c98-412b-96cf-235908a4eaf6-kube-api-access-x7ndl\") pod \"6c4b4464-1c98-412b-96cf-235908a4eaf6\" (UID: \"6c4b4464-1c98-412b-96cf-235908a4eaf6\") " Jan 27 17:21:30 crc kubenswrapper[5049]: I0127 17:21:30.869717 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4b4464-1c98-412b-96cf-235908a4eaf6-combined-ca-bundle\") pod \"6c4b4464-1c98-412b-96cf-235908a4eaf6\" (UID: \"6c4b4464-1c98-412b-96cf-235908a4eaf6\") " Jan 27 17:21:30 crc kubenswrapper[5049]: I0127 17:21:30.869838 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4b4464-1c98-412b-96cf-235908a4eaf6-config-data\") pod \"6c4b4464-1c98-412b-96cf-235908a4eaf6\" (UID: \"6c4b4464-1c98-412b-96cf-235908a4eaf6\") " Jan 27 17:21:30 crc kubenswrapper[5049]: I0127 17:21:30.875480 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4b4464-1c98-412b-96cf-235908a4eaf6-kube-api-access-x7ndl" (OuterVolumeSpecName: "kube-api-access-x7ndl") pod "6c4b4464-1c98-412b-96cf-235908a4eaf6" (UID: "6c4b4464-1c98-412b-96cf-235908a4eaf6"). InnerVolumeSpecName "kube-api-access-x7ndl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:30 crc kubenswrapper[5049]: I0127 17:21:30.893720 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4b4464-1c98-412b-96cf-235908a4eaf6-config-data" (OuterVolumeSpecName: "config-data") pod "6c4b4464-1c98-412b-96cf-235908a4eaf6" (UID: "6c4b4464-1c98-412b-96cf-235908a4eaf6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:30 crc kubenswrapper[5049]: I0127 17:21:30.898962 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4b4464-1c98-412b-96cf-235908a4eaf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c4b4464-1c98-412b-96cf-235908a4eaf6" (UID: "6c4b4464-1c98-412b-96cf-235908a4eaf6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:30 crc kubenswrapper[5049]: I0127 17:21:30.971122 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7ndl\" (UniqueName: \"kubernetes.io/projected/6c4b4464-1c98-412b-96cf-235908a4eaf6-kube-api-access-x7ndl\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:30 crc kubenswrapper[5049]: I0127 17:21:30.971157 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4b4464-1c98-412b-96cf-235908a4eaf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:30 crc kubenswrapper[5049]: I0127 17:21:30.971167 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4b4464-1c98-412b-96cf-235908a4eaf6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:31 crc kubenswrapper[5049]: I0127 17:21:31.106975 5049 generic.go:334] "Generic (PLEG): container finished" podID="6c4b4464-1c98-412b-96cf-235908a4eaf6" containerID="7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba" exitCode=0 Jan 27 17:21:31 crc kubenswrapper[5049]: I0127 17:21:31.107028 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6c4b4464-1c98-412b-96cf-235908a4eaf6","Type":"ContainerDied","Data":"7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba"} Jan 27 17:21:31 crc kubenswrapper[5049]: I0127 17:21:31.107067 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6c4b4464-1c98-412b-96cf-235908a4eaf6","Type":"ContainerDied","Data":"804ee76150fc11e3d13be331c997a6674afa1c433ee76dcbd92da93c91445dfd"} Jan 27 17:21:31 crc kubenswrapper[5049]: I0127 17:21:31.107100 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 17:21:31 crc kubenswrapper[5049]: I0127 17:21:31.107118 5049 scope.go:117] "RemoveContainer" containerID="7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba" Jan 27 17:21:31 crc kubenswrapper[5049]: I0127 17:21:31.180877 5049 scope.go:117] "RemoveContainer" containerID="7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba" Jan 27 17:21:31 crc kubenswrapper[5049]: I0127 17:21:31.187160 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 17:21:31 crc kubenswrapper[5049]: E0127 17:21:31.190764 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba\": container with ID starting with 7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba not found: ID does not exist" containerID="7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba" Jan 27 17:21:31 crc kubenswrapper[5049]: I0127 17:21:31.190801 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba"} err="failed to get container status \"7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba\": rpc error: code = NotFound desc = could not find container \"7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba\": container with ID starting with 7be888e758a478213132ac01cc29b7c2135636b1c5ed32ffe6f6da7daa6ab0ba not found: ID does not exist" Jan 27 17:21:31 crc kubenswrapper[5049]: I0127 17:21:31.192327 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 17:21:31 crc kubenswrapper[5049]: I0127 17:21:31.678747 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4b4464-1c98-412b-96cf-235908a4eaf6" path="/var/lib/kubelet/pods/6c4b4464-1c98-412b-96cf-235908a4eaf6/volumes" Jan 27 17:21:32 crc kubenswrapper[5049]: I0127 17:21:32.886872 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85447fcffb-gb5mq" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.209:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:21:32 crc kubenswrapper[5049]: I0127 17:21:32.888766 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85447fcffb-gb5mq" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.209:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:21:35 crc kubenswrapper[5049]: E0127 17:21:35.136656 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:35 crc kubenswrapper[5049]: E0127 17:21:35.137299 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:35 crc kubenswrapper[5049]: E0127 17:21:35.138131 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:35 crc kubenswrapper[5049]: E0127 17:21:35.138174 5049 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovsdb-server" Jan 27 17:21:35 crc kubenswrapper[5049]: E0127 17:21:35.138100 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:35 crc kubenswrapper[5049]: E0127 17:21:35.140058 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:35 crc kubenswrapper[5049]: E0127 17:21:35.141943 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:35 crc kubenswrapper[5049]: E0127 17:21:35.142003 5049 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovs-vswitchd" Jan 27 17:21:36 crc kubenswrapper[5049]: I0127 17:21:36.801864 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6ffd87fcd5-fn4z7" podUID="8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.161:9696/\": dial tcp 10.217.0.161:9696: connect: connection refused" Jan 27 17:21:37 crc kubenswrapper[5049]: I0127 17:21:37.896975 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85447fcffb-gb5mq" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.209:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:21:37 crc 
kubenswrapper[5049]: I0127 17:21:37.897026 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85447fcffb-gb5mq" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.209:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:21:40 crc kubenswrapper[5049]: E0127 17:21:40.136141 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:40 crc kubenswrapper[5049]: E0127 17:21:40.137878 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:40 crc kubenswrapper[5049]: E0127 17:21:40.137997 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:40 crc kubenswrapper[5049]: E0127 17:21:40.138622 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:40 crc kubenswrapper[5049]: E0127 17:21:40.138690 5049 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovsdb-server" Jan 27 17:21:40 crc kubenswrapper[5049]: E0127 17:21:40.142731 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:40 crc kubenswrapper[5049]: E0127 17:21:40.144637 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:40 crc kubenswrapper[5049]: E0127 17:21:40.144767 5049 prober.go:104] "Probe 
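Annotation: the barbican-api readiness failures in this stretch are plain HTTP timeouts against https://10.217.0.209:9311/healthcheck, reported as "Client.Timeout exceeded while awaiting headers". A roughly equivalent client-side sketch; the 1-second timeout and the skipped certificate verification are assumptions, since the actual values come from the pod's probe spec:

    package probesketch

    import (
        "crypto/tls"
        "net/http"
        "time"
    )

    // healthy issues one GET the way a timeout-bounded HTTP probe would.
    func healthy(url string) bool {
        client := &http.Client{
            Timeout:   1 * time.Second, // expiry yields the error string in the log
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return false // reported as probeResult="failure"
        }
        defer resp.Body.Close()
        return resp.StatusCode >= 200 && resp.StatusCode < 400
    }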
errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovs-vswitchd" Jan 27 17:21:41 crc kubenswrapper[5049]: I0127 17:21:41.976318 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.164285 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-config\") pod \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.164372 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-ovndb-tls-certs\") pod \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.164430 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-httpd-config\") pod \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.164492 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-public-tls-certs\") pod \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.164579 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-combined-ca-bundle\") pod \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.164731 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5kmb\" (UniqueName: \"kubernetes.io/projected/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-kube-api-access-h5kmb\") pod \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.164855 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-internal-tls-certs\") pod \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\" (UID: \"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5\") " Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.171172 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-kube-api-access-h5kmb" (OuterVolumeSpecName: "kube-api-access-h5kmb") pod "8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" (UID: "8e0b118e-d036-4ae2-ac85-5ab90eeea2f5"). InnerVolumeSpecName "kube-api-access-h5kmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.195904 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" (UID: "8e0b118e-d036-4ae2-ac85-5ab90eeea2f5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.218501 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" (UID: "8e0b118e-d036-4ae2-ac85-5ab90eeea2f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.237193 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" (UID: "8e0b118e-d036-4ae2-ac85-5ab90eeea2f5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.238177 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" (UID: "8e0b118e-d036-4ae2-ac85-5ab90eeea2f5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.248089 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-config" (OuterVolumeSpecName: "config") pod "8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" (UID: "8e0b118e-d036-4ae2-ac85-5ab90eeea2f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.250589 5049 generic.go:334] "Generic (PLEG): container finished" podID="8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" containerID="1c2d3e443ad2b2895c21e0101df97d1926bdd94413808edf387efa19ebe1729b" exitCode=0 Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.250639 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ffd87fcd5-fn4z7" event={"ID":"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5","Type":"ContainerDied","Data":"1c2d3e443ad2b2895c21e0101df97d1926bdd94413808edf387efa19ebe1729b"} Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.250714 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ffd87fcd5-fn4z7" event={"ID":"8e0b118e-d036-4ae2-ac85-5ab90eeea2f5","Type":"ContainerDied","Data":"bac231595afc205a3cd4fe58811d3bc5fcfe4bf090e3230f905e6a02cba693c9"} Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.250739 5049 scope.go:117] "RemoveContainer" containerID="2313102ae32c4e0748a8107e17b26893550ff00f6401ee6e5754bc7a542c6ec4" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.250920 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6ffd87fcd5-fn4z7" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.252115 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" (UID: "8e0b118e-d036-4ae2-ac85-5ab90eeea2f5"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.268162 5049 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.268191 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.268201 5049 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.268211 5049 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.268219 5049 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.268227 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.268236 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5kmb\" (UniqueName: \"kubernetes.io/projected/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5-kube-api-access-h5kmb\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.288961 5049 scope.go:117] "RemoveContainer" containerID="1c2d3e443ad2b2895c21e0101df97d1926bdd94413808edf387efa19ebe1729b" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.322373 5049 scope.go:117] "RemoveContainer" containerID="2313102ae32c4e0748a8107e17b26893550ff00f6401ee6e5754bc7a542c6ec4" Jan 27 17:21:42 crc kubenswrapper[5049]: E0127 17:21:42.322930 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2313102ae32c4e0748a8107e17b26893550ff00f6401ee6e5754bc7a542c6ec4\": container with ID starting with 2313102ae32c4e0748a8107e17b26893550ff00f6401ee6e5754bc7a542c6ec4 not found: ID does not exist" containerID="2313102ae32c4e0748a8107e17b26893550ff00f6401ee6e5754bc7a542c6ec4" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.322956 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2313102ae32c4e0748a8107e17b26893550ff00f6401ee6e5754bc7a542c6ec4"} err="failed to get container status \"2313102ae32c4e0748a8107e17b26893550ff00f6401ee6e5754bc7a542c6ec4\": rpc error: code 
= NotFound desc = could not find container \"2313102ae32c4e0748a8107e17b26893550ff00f6401ee6e5754bc7a542c6ec4\": container with ID starting with 2313102ae32c4e0748a8107e17b26893550ff00f6401ee6e5754bc7a542c6ec4 not found: ID does not exist" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.322975 5049 scope.go:117] "RemoveContainer" containerID="1c2d3e443ad2b2895c21e0101df97d1926bdd94413808edf387efa19ebe1729b" Jan 27 17:21:42 crc kubenswrapper[5049]: E0127 17:21:42.323171 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c2d3e443ad2b2895c21e0101df97d1926bdd94413808edf387efa19ebe1729b\": container with ID starting with 1c2d3e443ad2b2895c21e0101df97d1926bdd94413808edf387efa19ebe1729b not found: ID does not exist" containerID="1c2d3e443ad2b2895c21e0101df97d1926bdd94413808edf387efa19ebe1729b" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.323190 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c2d3e443ad2b2895c21e0101df97d1926bdd94413808edf387efa19ebe1729b"} err="failed to get container status \"1c2d3e443ad2b2895c21e0101df97d1926bdd94413808edf387efa19ebe1729b\": rpc error: code = NotFound desc = could not find container \"1c2d3e443ad2b2895c21e0101df97d1926bdd94413808edf387efa19ebe1729b\": container with ID starting with 1c2d3e443ad2b2895c21e0101df97d1926bdd94413808edf387efa19ebe1729b not found: ID does not exist" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.586255 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6ffd87fcd5-fn4z7"] Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.604426 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6ffd87fcd5-fn4z7"] Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.906989 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85447fcffb-gb5mq" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.209:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:21:42 crc kubenswrapper[5049]: I0127 17:21:42.906989 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85447fcffb-gb5mq" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.209:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:21:43 crc kubenswrapper[5049]: I0127 17:21:43.655513 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" path="/var/lib/kubelet/pods/8e0b118e-d036-4ae2-ac85-5ab90eeea2f5/volumes" Jan 27 17:21:45 crc kubenswrapper[5049]: E0127 17:21:45.136029 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:45 crc kubenswrapper[5049]: E0127 17:21:45.137088 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:45 crc kubenswrapper[5049]: E0127 17:21:45.137620 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:45 crc kubenswrapper[5049]: E0127 17:21:45.137721 5049 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovsdb-server" Jan 27 17:21:45 crc kubenswrapper[5049]: E0127 17:21:45.138618 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:45 crc kubenswrapper[5049]: E0127 17:21:45.140724 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:45 crc kubenswrapper[5049]: E0127 17:21:45.142647 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:45 crc kubenswrapper[5049]: E0127 17:21:45.142739 5049 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovs-vswitchd" Jan 27 17:21:47 crc kubenswrapper[5049]: I0127 17:21:47.782270 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:21:47 crc kubenswrapper[5049]: I0127 17:21:47.782738 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:21:47 crc kubenswrapper[5049]: 
I0127 17:21:47.782827 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 17:21:47 crc kubenswrapper[5049]: I0127 17:21:47.784098 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"081b74af340b03286f3b46d2254afe32fe4625cc1e5446a6c08c340a2428ad40"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:21:47 crc kubenswrapper[5049]: I0127 17:21:47.784235 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://081b74af340b03286f3b46d2254afe32fe4625cc1e5446a6c08c340a2428ad40" gracePeriod=600 Jan 27 17:21:47 crc kubenswrapper[5049]: I0127 17:21:47.915878 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85447fcffb-gb5mq" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.209:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 17:21:47 crc kubenswrapper[5049]: I0127 17:21:47.915969 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85447fcffb-gb5mq" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.209:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 17:21:48 crc kubenswrapper[5049]: I0127 17:21:48.319324 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="081b74af340b03286f3b46d2254afe32fe4625cc1e5446a6c08c340a2428ad40" exitCode=0 Jan 27 17:21:48 crc kubenswrapper[5049]: I0127 17:21:48.319402 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"081b74af340b03286f3b46d2254afe32fe4625cc1e5446a6c08c340a2428ad40"} Jan 27 17:21:48 crc kubenswrapper[5049]: I0127 17:21:48.319440 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00"} Jan 27 17:21:48 crc kubenswrapper[5049]: I0127 17:21:48.319467 5049 scope.go:117] "RemoveContainer" containerID="690eb8dd99a38db0e2d128dc8fae0eb0e7ee256d3467527d01896edbadf9fc55" Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.336998 5049 generic.go:334] "Generic (PLEG): container finished" podID="0d76a4d6-b3a5-4931-9fb1-13531143ebaa" containerID="1ad029fde4bbfe950ea64c277ff2274dbaf9c65f928fb3d6f8c204dfa84b5ab2" exitCode=137 Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.337478 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d76a4d6-b3a5-4931-9fb1-13531143ebaa","Type":"ContainerDied","Data":"1ad029fde4bbfe950ea64c277ff2274dbaf9c65f928fb3d6f8c204dfa84b5ab2"} Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.619560 5049 util.go:48] "No ready sandbox 
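Annotation: the machine-config-daemon sequence above is a complete liveness-driven restart: probe failure at 17:21:47.782738, "failed liveness probe, will be restarted", a kill with the pod's 600s grace period, ContainerDied with exitCode=0, then ContainerStarted for the replacement. A simplified sketch of the decision rule; the consecutive-failure threshold is an assumption, since the pod's failureThreshold is not visible in this log:

    package livesketch

    // shouldRestart reports whether a run of probe results crosses the
    // failure threshold that triggers the "will be restarted" path.
    func shouldRestart(results []bool, failureThreshold int) bool {
        consecutive := 0
        for _, ok := range results {
            if ok {
                consecutive = 0
                continue
            }
            consecutive++
            if consecutive >= failureThreshold {
                return true
            }
        }
        return false
    }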
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.795114 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-combined-ca-bundle\") pod \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") "
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.795438 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-etc-machine-id\") pod \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") "
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.795474 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-config-data-custom\") pod \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") "
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.795544 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-scripts\") pod \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") "
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.795577 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzptm\" (UniqueName: \"kubernetes.io/projected/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-kube-api-access-lzptm\") pod \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") "
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.795578 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0d76a4d6-b3a5-4931-9fb1-13531143ebaa" (UID: "0d76a4d6-b3a5-4931-9fb1-13531143ebaa"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.795647 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-config-data\") pod \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\" (UID: \"0d76a4d6-b3a5-4931-9fb1-13531143ebaa\") "
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.795938 5049 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.803005 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0d76a4d6-b3a5-4931-9fb1-13531143ebaa" (UID: "0d76a4d6-b3a5-4931-9fb1-13531143ebaa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.814147 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-scripts" (OuterVolumeSpecName: "scripts") pod "0d76a4d6-b3a5-4931-9fb1-13531143ebaa" (UID: "0d76a4d6-b3a5-4931-9fb1-13531143ebaa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.818543 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-kube-api-access-lzptm" (OuterVolumeSpecName: "kube-api-access-lzptm") pod "0d76a4d6-b3a5-4931-9fb1-13531143ebaa" (UID: "0d76a4d6-b3a5-4931-9fb1-13531143ebaa"). InnerVolumeSpecName "kube-api-access-lzptm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.843696 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d76a4d6-b3a5-4931-9fb1-13531143ebaa" (UID: "0d76a4d6-b3a5-4931-9fb1-13531143ebaa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.846096 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.875855 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-config-data" (OuterVolumeSpecName: "config-data") pod "0d76a4d6-b3a5-4931-9fb1-13531143ebaa" (UID: "0d76a4d6-b3a5-4931-9fb1-13531143ebaa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.897390 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.897423 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.897435 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.897442 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.897450 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzptm\" (UniqueName: \"kubernetes.io/projected/0d76a4d6-b3a5-4931-9fb1-13531143ebaa-kube-api-access-lzptm\") on node \"crc\" DevicePath \"\""
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.998609 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af4a67e-8714-4d41-ab32-7b2e526a0799-combined-ca-bundle\") pod \"0af4a67e-8714-4d41-ab32-7b2e526a0799\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") "
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.998647 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift\") pod \"0af4a67e-8714-4d41-ab32-7b2e526a0799\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") "
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.998744 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj9ms\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-kube-api-access-lj9ms\") pod \"0af4a67e-8714-4d41-ab32-7b2e526a0799\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") "
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.998772 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0af4a67e-8714-4d41-ab32-7b2e526a0799-cache\") pod \"0af4a67e-8714-4d41-ab32-7b2e526a0799\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") "
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.998843 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"0af4a67e-8714-4d41-ab32-7b2e526a0799\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") "
Jan 27 17:21:49 crc kubenswrapper[5049]: I0127 17:21:49.998872 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0af4a67e-8714-4d41-ab32-7b2e526a0799-lock\") pod \"0af4a67e-8714-4d41-ab32-7b2e526a0799\" (UID: \"0af4a67e-8714-4d41-ab32-7b2e526a0799\") "
Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:49.999604 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0af4a67e-8714-4d41-ab32-7b2e526a0799-cache" (OuterVolumeSpecName: "cache") pod "0af4a67e-8714-4d41-ab32-7b2e526a0799" (UID: "0af4a67e-8714-4d41-ab32-7b2e526a0799"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:49.999794 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0af4a67e-8714-4d41-ab32-7b2e526a0799-lock" (OuterVolumeSpecName: "lock") pod "0af4a67e-8714-4d41-ab32-7b2e526a0799" (UID: "0af4a67e-8714-4d41-ab32-7b2e526a0799"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.002033 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0af4a67e-8714-4d41-ab32-7b2e526a0799" (UID: "0af4a67e-8714-4d41-ab32-7b2e526a0799"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.002322 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "swift") pod "0af4a67e-8714-4d41-ab32-7b2e526a0799" (UID: "0af4a67e-8714-4d41-ab32-7b2e526a0799"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.003864 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-kube-api-access-lj9ms" (OuterVolumeSpecName: "kube-api-access-lj9ms") pod "0af4a67e-8714-4d41-ab32-7b2e526a0799" (UID: "0af4a67e-8714-4d41-ab32-7b2e526a0799"). InnerVolumeSpecName "kube-api-access-lj9ms". PluginName "kubernetes.io/projected", VolumeGidValue ""
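Annotation: the swift-storage-0 volume local-storage01-crc is a local PV, so its release in the records below is two-phase: the per-pod UnmountVolume.TearDown above, then a node-scoped UnmountDevice (reconciler_common.go:286 / operation_generator.go:917) once no pod references the device. A sketch of that ordering only; the signatures are hypothetical:

    package pvsketch

    // release runs the pod-level teardown first and unmounts the device
    // itself only once the last reference is gone, matching the log's order.
    func release(tearDownPod func() error, unmountDevice func() error, remainingRefs int) error {
        if err := tearDownPod(); err != nil {
            return err
        }
        if remainingRefs == 0 {
            return unmountDevice() // "UnmountDevice succeeded for volume ..."
        }
        return nil
    }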
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.101257 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj9ms\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-kube-api-access-lj9ms\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.101307 5049 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0af4a67e-8714-4d41-ab32-7b2e526a0799-cache\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.101353 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.101376 5049 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0af4a67e-8714-4d41-ab32-7b2e526a0799-lock\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.101393 5049 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0af4a67e-8714-4d41-ab32-7b2e526a0799-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.115525 5049 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.136253 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.136174 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4 is running failed: container process not found" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.136921 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.136993 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4 is running failed: container process not found" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.137751 5049 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4 is running failed: container process not found" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.137800 5049 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovs-vswitchd" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.137751 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.137861 5049 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-7s8s5" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovsdb-server" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.202536 5049 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.289301 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0af4a67e-8714-4d41-ab32-7b2e526a0799-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0af4a67e-8714-4d41-ab32-7b2e526a0799" (UID: "0af4a67e-8714-4d41-ab32-7b2e526a0799"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.304580 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af4a67e-8714-4d41-ab32-7b2e526a0799-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.356197 5049 generic.go:334] "Generic (PLEG): container finished" podID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerID="e31de55fb4f8fa7f46b4db48cfa14b2dcd4abbdcb8b59e5f1b5095831edf1d4b" exitCode=137 Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.356260 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"e31de55fb4f8fa7f46b4db48cfa14b2dcd4abbdcb8b59e5f1b5095831edf1d4b"} Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.356290 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0af4a67e-8714-4d41-ab32-7b2e526a0799","Type":"ContainerDied","Data":"ad82617d8aed3b32808c93afe54d1c8ea6d727e32e61acf0b8b6c755610cdf61"} Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.356307 5049 scope.go:117] "RemoveContainer" containerID="e31de55fb4f8fa7f46b4db48cfa14b2dcd4abbdcb8b59e5f1b5095831edf1d4b" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.356465 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.360530 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7s8s5_009eaa47-1d7c-46e6-aeea-b25f77ea35a9/ovs-vswitchd/0.log" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.365928 5049 generic.go:334] "Generic (PLEG): container finished" podID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerID="c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4" exitCode=137 Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.366001 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7s8s5" event={"ID":"009eaa47-1d7c-46e6-aeea-b25f77ea35a9","Type":"ContainerDied","Data":"c2421c867eac1f26e65a87762362cc494f6fe812990d0919eaa0fb9275c647d4"} Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.366031 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7s8s5" event={"ID":"009eaa47-1d7c-46e6-aeea-b25f77ea35a9","Type":"ContainerDied","Data":"bcc01a0403691fbcf568a97f26783937c01c259b6c035aa9b7379ca70667d7f0"} Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.366045 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcc01a0403691fbcf568a97f26783937c01c259b6c035aa9b7379ca70667d7f0" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.367394 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7s8s5_009eaa47-1d7c-46e6-aeea-b25f77ea35a9/ovs-vswitchd/0.log" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.368566 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.369554 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d76a4d6-b3a5-4931-9fb1-13531143ebaa","Type":"ContainerDied","Data":"045e10e4a3d1c58210da9330a8483667bd509d77dfc2cf6efe56ef6f872afb29"} Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.369682 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.400318 5049 scope.go:117] "RemoveContainer" containerID="3a93d9cd74365dc4b079066ac2c67767791d85d61773322bf02b5a01b937828e" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.415448 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.421093 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.439577 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.445518 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.471480 5049 scope.go:117] "RemoveContainer" containerID="49580b0e03bfc33665c28a177a2a91fdc57ef1f9597020b693192a0906c2b084" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.494449 5049 scope.go:117] "RemoveContainer" containerID="6cb007db423c0f0689563185212e4dbebe9824bceac2708df056fd0a50a20fec" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.511369 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp5f6\" (UniqueName: \"kubernetes.io/projected/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-kube-api-access-zp5f6\") pod \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.511487 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-etc-ovs\") pod \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.511529 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-log\") pod \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.511555 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-run\") pod \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.511616 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-scripts\") pod \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.511628 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "009eaa47-1d7c-46e6-aeea-b25f77ea35a9" (UID: "009eaa47-1d7c-46e6-aeea-b25f77ea35a9"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.511693 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-lib\") pod \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\" (UID: \"009eaa47-1d7c-46e6-aeea-b25f77ea35a9\") " Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.511706 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-run" (OuterVolumeSpecName: "var-run") pod "009eaa47-1d7c-46e6-aeea-b25f77ea35a9" (UID: "009eaa47-1d7c-46e6-aeea-b25f77ea35a9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.511707 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-log" (OuterVolumeSpecName: "var-log") pod "009eaa47-1d7c-46e6-aeea-b25f77ea35a9" (UID: "009eaa47-1d7c-46e6-aeea-b25f77ea35a9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.511791 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-lib" (OuterVolumeSpecName: "var-lib") pod "009eaa47-1d7c-46e6-aeea-b25f77ea35a9" (UID: "009eaa47-1d7c-46e6-aeea-b25f77ea35a9"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.512008 5049 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.512341 5049 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.513078 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-scripts" (OuterVolumeSpecName: "scripts") pod "009eaa47-1d7c-46e6-aeea-b25f77ea35a9" (UID: "009eaa47-1d7c-46e6-aeea-b25f77ea35a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.523148 5049 scope.go:117] "RemoveContainer" containerID="d642302c59a400ff1d5fcfc04bd4f3e11605424d17a253b3bbb525c32f0483b0" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.538947 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-kube-api-access-zp5f6" (OuterVolumeSpecName: "kube-api-access-zp5f6") pod "009eaa47-1d7c-46e6-aeea-b25f77ea35a9" (UID: "009eaa47-1d7c-46e6-aeea-b25f77ea35a9"). InnerVolumeSpecName "kube-api-access-zp5f6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.555078 5049 scope.go:117] "RemoveContainer" containerID="1822fafc255330bb427411a6c035e5846b79b8c89c234fa42ba8b370aa1361a1" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.574120 5049 scope.go:117] "RemoveContainer" containerID="eb157c80aa43da636b4e1b90406a76023c1144590efb8be2dbcad94062ef1055" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.594565 5049 scope.go:117] "RemoveContainer" containerID="3b0b393e0e4d1401963714ab9252d9671cfb2791dc7e153bd5d4476e4584159a" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.611252 5049 scope.go:117] "RemoveContainer" containerID="a5245f69092058d5c8b04c536b6c645c68af7ffcd316ef3d92d2eec0e910b537" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.614346 5049 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-lib\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.614389 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp5f6\" (UniqueName: \"kubernetes.io/projected/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-kube-api-access-zp5f6\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.614408 5049 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.614428 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/009eaa47-1d7c-46e6-aeea-b25f77ea35a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.632216 5049 scope.go:117] "RemoveContainer" containerID="a7718a70cb7ace0faec55cd3c7efc512f21a009631dd55ff9c6521f3669078d3" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.651921 5049 scope.go:117] "RemoveContainer" containerID="22d0b0ce240d4ffa1f356f14354013ccc5a82232f0a1c7cab9e0c778a57cf470" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.684255 5049 scope.go:117] "RemoveContainer" containerID="c54193743efddc35cc5d308539a2b6ddf4f91a56153042a087edab8b4178d076" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.726568 5049 scope.go:117] "RemoveContainer" containerID="224833d0edff6b380a8a1b7d43dabc8125423b2aafe62311878605dff679aa61" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.747269 5049 scope.go:117] "RemoveContainer" containerID="03c47c98df171b7b9208a6248b44605bd8cbfeced5546e25c5829b8a2c6bc049" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.765277 5049 scope.go:117] "RemoveContainer" containerID="a2dfb2a3d54aba3ae55d7279c088ecb8bf8242fef6f3a7d78a6cab7c08d00f25" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.819945 5049 scope.go:117] "RemoveContainer" containerID="e31de55fb4f8fa7f46b4db48cfa14b2dcd4abbdcb8b59e5f1b5095831edf1d4b" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.820781 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e31de55fb4f8fa7f46b4db48cfa14b2dcd4abbdcb8b59e5f1b5095831edf1d4b\": container with ID starting with e31de55fb4f8fa7f46b4db48cfa14b2dcd4abbdcb8b59e5f1b5095831edf1d4b not found: ID does not exist" containerID="e31de55fb4f8fa7f46b4db48cfa14b2dcd4abbdcb8b59e5f1b5095831edf1d4b" Jan 27 17:21:50 crc 
kubenswrapper[5049]: I0127 17:21:50.820814 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e31de55fb4f8fa7f46b4db48cfa14b2dcd4abbdcb8b59e5f1b5095831edf1d4b"} err="failed to get container status \"e31de55fb4f8fa7f46b4db48cfa14b2dcd4abbdcb8b59e5f1b5095831edf1d4b\": rpc error: code = NotFound desc = could not find container \"e31de55fb4f8fa7f46b4db48cfa14b2dcd4abbdcb8b59e5f1b5095831edf1d4b\": container with ID starting with e31de55fb4f8fa7f46b4db48cfa14b2dcd4abbdcb8b59e5f1b5095831edf1d4b not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.820832 5049 scope.go:117] "RemoveContainer" containerID="3a93d9cd74365dc4b079066ac2c67767791d85d61773322bf02b5a01b937828e" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.827264 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a93d9cd74365dc4b079066ac2c67767791d85d61773322bf02b5a01b937828e\": container with ID starting with 3a93d9cd74365dc4b079066ac2c67767791d85d61773322bf02b5a01b937828e not found: ID does not exist" containerID="3a93d9cd74365dc4b079066ac2c67767791d85d61773322bf02b5a01b937828e" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.827303 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a93d9cd74365dc4b079066ac2c67767791d85d61773322bf02b5a01b937828e"} err="failed to get container status \"3a93d9cd74365dc4b079066ac2c67767791d85d61773322bf02b5a01b937828e\": rpc error: code = NotFound desc = could not find container \"3a93d9cd74365dc4b079066ac2c67767791d85d61773322bf02b5a01b937828e\": container with ID starting with 3a93d9cd74365dc4b079066ac2c67767791d85d61773322bf02b5a01b937828e not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.827328 5049 scope.go:117] "RemoveContainer" containerID="49580b0e03bfc33665c28a177a2a91fdc57ef1f9597020b693192a0906c2b084" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.827615 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49580b0e03bfc33665c28a177a2a91fdc57ef1f9597020b693192a0906c2b084\": container with ID starting with 49580b0e03bfc33665c28a177a2a91fdc57ef1f9597020b693192a0906c2b084 not found: ID does not exist" containerID="49580b0e03bfc33665c28a177a2a91fdc57ef1f9597020b693192a0906c2b084" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.827637 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49580b0e03bfc33665c28a177a2a91fdc57ef1f9597020b693192a0906c2b084"} err="failed to get container status \"49580b0e03bfc33665c28a177a2a91fdc57ef1f9597020b693192a0906c2b084\": rpc error: code = NotFound desc = could not find container \"49580b0e03bfc33665c28a177a2a91fdc57ef1f9597020b693192a0906c2b084\": container with ID starting with 49580b0e03bfc33665c28a177a2a91fdc57ef1f9597020b693192a0906c2b084 not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.827651 5049 scope.go:117] "RemoveContainer" containerID="6cb007db423c0f0689563185212e4dbebe9824bceac2708df056fd0a50a20fec" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.831132 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cb007db423c0f0689563185212e4dbebe9824bceac2708df056fd0a50a20fec\": container with ID starting with 
6cb007db423c0f0689563185212e4dbebe9824bceac2708df056fd0a50a20fec not found: ID does not exist" containerID="6cb007db423c0f0689563185212e4dbebe9824bceac2708df056fd0a50a20fec" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.831162 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cb007db423c0f0689563185212e4dbebe9824bceac2708df056fd0a50a20fec"} err="failed to get container status \"6cb007db423c0f0689563185212e4dbebe9824bceac2708df056fd0a50a20fec\": rpc error: code = NotFound desc = could not find container \"6cb007db423c0f0689563185212e4dbebe9824bceac2708df056fd0a50a20fec\": container with ID starting with 6cb007db423c0f0689563185212e4dbebe9824bceac2708df056fd0a50a20fec not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.831181 5049 scope.go:117] "RemoveContainer" containerID="d642302c59a400ff1d5fcfc04bd4f3e11605424d17a253b3bbb525c32f0483b0" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.832055 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d642302c59a400ff1d5fcfc04bd4f3e11605424d17a253b3bbb525c32f0483b0\": container with ID starting with d642302c59a400ff1d5fcfc04bd4f3e11605424d17a253b3bbb525c32f0483b0 not found: ID does not exist" containerID="d642302c59a400ff1d5fcfc04bd4f3e11605424d17a253b3bbb525c32f0483b0" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.832083 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d642302c59a400ff1d5fcfc04bd4f3e11605424d17a253b3bbb525c32f0483b0"} err="failed to get container status \"d642302c59a400ff1d5fcfc04bd4f3e11605424d17a253b3bbb525c32f0483b0\": rpc error: code = NotFound desc = could not find container \"d642302c59a400ff1d5fcfc04bd4f3e11605424d17a253b3bbb525c32f0483b0\": container with ID starting with d642302c59a400ff1d5fcfc04bd4f3e11605424d17a253b3bbb525c32f0483b0 not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.832100 5049 scope.go:117] "RemoveContainer" containerID="1822fafc255330bb427411a6c035e5846b79b8c89c234fa42ba8b370aa1361a1" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.832381 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1822fafc255330bb427411a6c035e5846b79b8c89c234fa42ba8b370aa1361a1\": container with ID starting with 1822fafc255330bb427411a6c035e5846b79b8c89c234fa42ba8b370aa1361a1 not found: ID does not exist" containerID="1822fafc255330bb427411a6c035e5846b79b8c89c234fa42ba8b370aa1361a1" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.832401 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1822fafc255330bb427411a6c035e5846b79b8c89c234fa42ba8b370aa1361a1"} err="failed to get container status \"1822fafc255330bb427411a6c035e5846b79b8c89c234fa42ba8b370aa1361a1\": rpc error: code = NotFound desc = could not find container \"1822fafc255330bb427411a6c035e5846b79b8c89c234fa42ba8b370aa1361a1\": container with ID starting with 1822fafc255330bb427411a6c035e5846b79b8c89c234fa42ba8b370aa1361a1 not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.832413 5049 scope.go:117] "RemoveContainer" containerID="eb157c80aa43da636b4e1b90406a76023c1144590efb8be2dbcad94062ef1055" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.832650 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"eb157c80aa43da636b4e1b90406a76023c1144590efb8be2dbcad94062ef1055\": container with ID starting with eb157c80aa43da636b4e1b90406a76023c1144590efb8be2dbcad94062ef1055 not found: ID does not exist" containerID="eb157c80aa43da636b4e1b90406a76023c1144590efb8be2dbcad94062ef1055" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.832681 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb157c80aa43da636b4e1b90406a76023c1144590efb8be2dbcad94062ef1055"} err="failed to get container status \"eb157c80aa43da636b4e1b90406a76023c1144590efb8be2dbcad94062ef1055\": rpc error: code = NotFound desc = could not find container \"eb157c80aa43da636b4e1b90406a76023c1144590efb8be2dbcad94062ef1055\": container with ID starting with eb157c80aa43da636b4e1b90406a76023c1144590efb8be2dbcad94062ef1055 not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.832692 5049 scope.go:117] "RemoveContainer" containerID="3b0b393e0e4d1401963714ab9252d9671cfb2791dc7e153bd5d4476e4584159a" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.835224 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b0b393e0e4d1401963714ab9252d9671cfb2791dc7e153bd5d4476e4584159a\": container with ID starting with 3b0b393e0e4d1401963714ab9252d9671cfb2791dc7e153bd5d4476e4584159a not found: ID does not exist" containerID="3b0b393e0e4d1401963714ab9252d9671cfb2791dc7e153bd5d4476e4584159a" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.835247 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b0b393e0e4d1401963714ab9252d9671cfb2791dc7e153bd5d4476e4584159a"} err="failed to get container status \"3b0b393e0e4d1401963714ab9252d9671cfb2791dc7e153bd5d4476e4584159a\": rpc error: code = NotFound desc = could not find container \"3b0b393e0e4d1401963714ab9252d9671cfb2791dc7e153bd5d4476e4584159a\": container with ID starting with 3b0b393e0e4d1401963714ab9252d9671cfb2791dc7e153bd5d4476e4584159a not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.835262 5049 scope.go:117] "RemoveContainer" containerID="a5245f69092058d5c8b04c536b6c645c68af7ffcd316ef3d92d2eec0e910b537" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.835815 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5245f69092058d5c8b04c536b6c645c68af7ffcd316ef3d92d2eec0e910b537\": container with ID starting with a5245f69092058d5c8b04c536b6c645c68af7ffcd316ef3d92d2eec0e910b537 not found: ID does not exist" containerID="a5245f69092058d5c8b04c536b6c645c68af7ffcd316ef3d92d2eec0e910b537" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.835835 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5245f69092058d5c8b04c536b6c645c68af7ffcd316ef3d92d2eec0e910b537"} err="failed to get container status \"a5245f69092058d5c8b04c536b6c645c68af7ffcd316ef3d92d2eec0e910b537\": rpc error: code = NotFound desc = could not find container \"a5245f69092058d5c8b04c536b6c645c68af7ffcd316ef3d92d2eec0e910b537\": container with ID starting with a5245f69092058d5c8b04c536b6c645c68af7ffcd316ef3d92d2eec0e910b537 not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.835848 5049 scope.go:117] "RemoveContainer" 
containerID="a7718a70cb7ace0faec55cd3c7efc512f21a009631dd55ff9c6521f3669078d3" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.836058 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7718a70cb7ace0faec55cd3c7efc512f21a009631dd55ff9c6521f3669078d3\": container with ID starting with a7718a70cb7ace0faec55cd3c7efc512f21a009631dd55ff9c6521f3669078d3 not found: ID does not exist" containerID="a7718a70cb7ace0faec55cd3c7efc512f21a009631dd55ff9c6521f3669078d3" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.836079 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7718a70cb7ace0faec55cd3c7efc512f21a009631dd55ff9c6521f3669078d3"} err="failed to get container status \"a7718a70cb7ace0faec55cd3c7efc512f21a009631dd55ff9c6521f3669078d3\": rpc error: code = NotFound desc = could not find container \"a7718a70cb7ace0faec55cd3c7efc512f21a009631dd55ff9c6521f3669078d3\": container with ID starting with a7718a70cb7ace0faec55cd3c7efc512f21a009631dd55ff9c6521f3669078d3 not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.836093 5049 scope.go:117] "RemoveContainer" containerID="22d0b0ce240d4ffa1f356f14354013ccc5a82232f0a1c7cab9e0c778a57cf470" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.836330 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22d0b0ce240d4ffa1f356f14354013ccc5a82232f0a1c7cab9e0c778a57cf470\": container with ID starting with 22d0b0ce240d4ffa1f356f14354013ccc5a82232f0a1c7cab9e0c778a57cf470 not found: ID does not exist" containerID="22d0b0ce240d4ffa1f356f14354013ccc5a82232f0a1c7cab9e0c778a57cf470" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.836349 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22d0b0ce240d4ffa1f356f14354013ccc5a82232f0a1c7cab9e0c778a57cf470"} err="failed to get container status \"22d0b0ce240d4ffa1f356f14354013ccc5a82232f0a1c7cab9e0c778a57cf470\": rpc error: code = NotFound desc = could not find container \"22d0b0ce240d4ffa1f356f14354013ccc5a82232f0a1c7cab9e0c778a57cf470\": container with ID starting with 22d0b0ce240d4ffa1f356f14354013ccc5a82232f0a1c7cab9e0c778a57cf470 not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.836361 5049 scope.go:117] "RemoveContainer" containerID="c54193743efddc35cc5d308539a2b6ddf4f91a56153042a087edab8b4178d076" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.836598 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c54193743efddc35cc5d308539a2b6ddf4f91a56153042a087edab8b4178d076\": container with ID starting with c54193743efddc35cc5d308539a2b6ddf4f91a56153042a087edab8b4178d076 not found: ID does not exist" containerID="c54193743efddc35cc5d308539a2b6ddf4f91a56153042a087edab8b4178d076" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.836631 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54193743efddc35cc5d308539a2b6ddf4f91a56153042a087edab8b4178d076"} err="failed to get container status \"c54193743efddc35cc5d308539a2b6ddf4f91a56153042a087edab8b4178d076\": rpc error: code = NotFound desc = could not find container \"c54193743efddc35cc5d308539a2b6ddf4f91a56153042a087edab8b4178d076\": container with ID starting with 
c54193743efddc35cc5d308539a2b6ddf4f91a56153042a087edab8b4178d076 not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.836643 5049 scope.go:117] "RemoveContainer" containerID="224833d0edff6b380a8a1b7d43dabc8125423b2aafe62311878605dff679aa61" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.836850 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"224833d0edff6b380a8a1b7d43dabc8125423b2aafe62311878605dff679aa61\": container with ID starting with 224833d0edff6b380a8a1b7d43dabc8125423b2aafe62311878605dff679aa61 not found: ID does not exist" containerID="224833d0edff6b380a8a1b7d43dabc8125423b2aafe62311878605dff679aa61" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.836878 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"224833d0edff6b380a8a1b7d43dabc8125423b2aafe62311878605dff679aa61"} err="failed to get container status \"224833d0edff6b380a8a1b7d43dabc8125423b2aafe62311878605dff679aa61\": rpc error: code = NotFound desc = could not find container \"224833d0edff6b380a8a1b7d43dabc8125423b2aafe62311878605dff679aa61\": container with ID starting with 224833d0edff6b380a8a1b7d43dabc8125423b2aafe62311878605dff679aa61 not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.836891 5049 scope.go:117] "RemoveContainer" containerID="03c47c98df171b7b9208a6248b44605bd8cbfeced5546e25c5829b8a2c6bc049" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.837080 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03c47c98df171b7b9208a6248b44605bd8cbfeced5546e25c5829b8a2c6bc049\": container with ID starting with 03c47c98df171b7b9208a6248b44605bd8cbfeced5546e25c5829b8a2c6bc049 not found: ID does not exist" containerID="03c47c98df171b7b9208a6248b44605bd8cbfeced5546e25c5829b8a2c6bc049" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.837098 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03c47c98df171b7b9208a6248b44605bd8cbfeced5546e25c5829b8a2c6bc049"} err="failed to get container status \"03c47c98df171b7b9208a6248b44605bd8cbfeced5546e25c5829b8a2c6bc049\": rpc error: code = NotFound desc = could not find container \"03c47c98df171b7b9208a6248b44605bd8cbfeced5546e25c5829b8a2c6bc049\": container with ID starting with 03c47c98df171b7b9208a6248b44605bd8cbfeced5546e25c5829b8a2c6bc049 not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.837111 5049 scope.go:117] "RemoveContainer" containerID="a2dfb2a3d54aba3ae55d7279c088ecb8bf8242fef6f3a7d78a6cab7c08d00f25" Jan 27 17:21:50 crc kubenswrapper[5049]: E0127 17:21:50.837358 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2dfb2a3d54aba3ae55d7279c088ecb8bf8242fef6f3a7d78a6cab7c08d00f25\": container with ID starting with a2dfb2a3d54aba3ae55d7279c088ecb8bf8242fef6f3a7d78a6cab7c08d00f25 not found: ID does not exist" containerID="a2dfb2a3d54aba3ae55d7279c088ecb8bf8242fef6f3a7d78a6cab7c08d00f25" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.837377 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2dfb2a3d54aba3ae55d7279c088ecb8bf8242fef6f3a7d78a6cab7c08d00f25"} err="failed to get container status \"a2dfb2a3d54aba3ae55d7279c088ecb8bf8242fef6f3a7d78a6cab7c08d00f25\": rpc 
error: code = NotFound desc = could not find container \"a2dfb2a3d54aba3ae55d7279c088ecb8bf8242fef6f3a7d78a6cab7c08d00f25\": container with ID starting with a2dfb2a3d54aba3ae55d7279c088ecb8bf8242fef6f3a7d78a6cab7c08d00f25 not found: ID does not exist" Jan 27 17:21:50 crc kubenswrapper[5049]: I0127 17:21:50.837389 5049 scope.go:117] "RemoveContainer" containerID="86ae2065456e0be818d0d2f291c75fa54963a08b110e6a19c05980e7f58e4078" Jan 27 17:21:51 crc kubenswrapper[5049]: I0127 17:21:51.384295 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7s8s5" Jan 27 17:21:51 crc kubenswrapper[5049]: I0127 17:21:51.661818 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" path="/var/lib/kubelet/pods/0af4a67e-8714-4d41-ab32-7b2e526a0799/volumes" Jan 27 17:21:51 crc kubenswrapper[5049]: I0127 17:21:51.665650 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d76a4d6-b3a5-4931-9fb1-13531143ebaa" path="/var/lib/kubelet/pods/0d76a4d6-b3a5-4931-9fb1-13531143ebaa/volumes" Jan 27 17:21:51 crc kubenswrapper[5049]: I0127 17:21:51.705526 5049 scope.go:117] "RemoveContainer" containerID="1ad029fde4bbfe950ea64c277ff2274dbaf9c65f928fb3d6f8c204dfa84b5ab2" Jan 27 17:21:51 crc kubenswrapper[5049]: I0127 17:21:51.731320 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-7s8s5"] Jan 27 17:21:51 crc kubenswrapper[5049]: I0127 17:21:51.741050 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-7s8s5"] Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.101558 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85447fcffb-gb5mq" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.209:9311/healthcheck\": EOF" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.119225 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-85447fcffb-gb5mq" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.209:9311/healthcheck\": EOF" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.367400 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.403694 5049 generic.go:334] "Generic (PLEG): container finished" podID="7b36a6d6-32ec-4c02-b274-319cb860222c" containerID="5212effe4a5f9b45d0cde4b2a3588776508dc67c73e64ac7ea425d6da44261e9" exitCode=137 Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.403876 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" event={"ID":"7b36a6d6-32ec-4c02-b274-319cb860222c","Type":"ContainerDied","Data":"5212effe4a5f9b45d0cde4b2a3588776508dc67c73e64ac7ea425d6da44261e9"} Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.411232 5049 generic.go:334] "Generic (PLEG): container finished" podID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerID="778a554e4a163939392827258f2231fc8efa1afba442cb645dcf8ea6914e87df" exitCode=137 Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.411292 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85447fcffb-gb5mq" event={"ID":"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52","Type":"ContainerDied","Data":"778a554e4a163939392827258f2231fc8efa1afba442cb645dcf8ea6914e87df"} Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.413579 5049 generic.go:334] "Generic (PLEG): container finished" podID="e55f335e-88f4-4e41-a177-0771cfd532c4" containerID="36c762f6d6cd4c0f7945ad3864602bd953eb4a082ccd22b801f8f15e74b65c75" exitCode=137 Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.413627 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9f58f99c-tq7mf" event={"ID":"e55f335e-88f4-4e41-a177-0771cfd532c4","Type":"ContainerDied","Data":"36c762f6d6cd4c0f7945ad3864602bd953eb4a082ccd22b801f8f15e74b65c75"} Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.413655 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9f58f99c-tq7mf" event={"ID":"e55f335e-88f4-4e41-a177-0771cfd532c4","Type":"ContainerDied","Data":"dc0e83537d03389124c848e9ddbeeea4be2cce802e2b99006646d1207f8bed4f"} Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.413689 5049 scope.go:117] "RemoveContainer" containerID="36c762f6d6cd4c0f7945ad3864602bd953eb4a082ccd22b801f8f15e74b65c75" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.413833 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-c9f58f99c-tq7mf" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.444324 5049 scope.go:117] "RemoveContainer" containerID="9a3023e42b1b3429592486ac2e6b5978a9de9fd3b130d6fd7a2ccd3cd217fb81" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.461241 5049 scope.go:117] "RemoveContainer" containerID="36c762f6d6cd4c0f7945ad3864602bd953eb4a082ccd22b801f8f15e74b65c75" Jan 27 17:21:52 crc kubenswrapper[5049]: E0127 17:21:52.461606 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36c762f6d6cd4c0f7945ad3864602bd953eb4a082ccd22b801f8f15e74b65c75\": container with ID starting with 36c762f6d6cd4c0f7945ad3864602bd953eb4a082ccd22b801f8f15e74b65c75 not found: ID does not exist" containerID="36c762f6d6cd4c0f7945ad3864602bd953eb4a082ccd22b801f8f15e74b65c75" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.461648 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36c762f6d6cd4c0f7945ad3864602bd953eb4a082ccd22b801f8f15e74b65c75"} err="failed to get container status \"36c762f6d6cd4c0f7945ad3864602bd953eb4a082ccd22b801f8f15e74b65c75\": rpc error: code = NotFound desc = could not find container \"36c762f6d6cd4c0f7945ad3864602bd953eb4a082ccd22b801f8f15e74b65c75\": container with ID starting with 36c762f6d6cd4c0f7945ad3864602bd953eb4a082ccd22b801f8f15e74b65c75 not found: ID does not exist" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.461688 5049 scope.go:117] "RemoveContainer" containerID="9a3023e42b1b3429592486ac2e6b5978a9de9fd3b130d6fd7a2ccd3cd217fb81" Jan 27 17:21:52 crc kubenswrapper[5049]: E0127 17:21:52.461965 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a3023e42b1b3429592486ac2e6b5978a9de9fd3b130d6fd7a2ccd3cd217fb81\": container with ID starting with 9a3023e42b1b3429592486ac2e6b5978a9de9fd3b130d6fd7a2ccd3cd217fb81 not found: ID does not exist" containerID="9a3023e42b1b3429592486ac2e6b5978a9de9fd3b130d6fd7a2ccd3cd217fb81" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.461998 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a3023e42b1b3429592486ac2e6b5978a9de9fd3b130d6fd7a2ccd3cd217fb81"} err="failed to get container status \"9a3023e42b1b3429592486ac2e6b5978a9de9fd3b130d6fd7a2ccd3cd217fb81\": rpc error: code = NotFound desc = could not find container \"9a3023e42b1b3429592486ac2e6b5978a9de9fd3b130d6fd7a2ccd3cd217fb81\": container with ID starting with 9a3023e42b1b3429592486ac2e6b5978a9de9fd3b130d6fd7a2ccd3cd217fb81 not found: ID does not exist" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.532831 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.542482 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-combined-ca-bundle\") pod \"e55f335e-88f4-4e41-a177-0771cfd532c4\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.542526 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bs9h\" (UniqueName: \"kubernetes.io/projected/e55f335e-88f4-4e41-a177-0771cfd532c4-kube-api-access-8bs9h\") pod \"e55f335e-88f4-4e41-a177-0771cfd532c4\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.542561 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-config-data\") pod \"e55f335e-88f4-4e41-a177-0771cfd532c4\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.542596 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-config-data-custom\") pod \"e55f335e-88f4-4e41-a177-0771cfd532c4\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.542740 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e55f335e-88f4-4e41-a177-0771cfd532c4-logs\") pod \"e55f335e-88f4-4e41-a177-0771cfd532c4\" (UID: \"e55f335e-88f4-4e41-a177-0771cfd532c4\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.543846 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e55f335e-88f4-4e41-a177-0771cfd532c4-logs" (OuterVolumeSpecName: "logs") pod "e55f335e-88f4-4e41-a177-0771cfd532c4" (UID: "e55f335e-88f4-4e41-a177-0771cfd532c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.547971 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e55f335e-88f4-4e41-a177-0771cfd532c4" (UID: "e55f335e-88f4-4e41-a177-0771cfd532c4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.557207 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e55f335e-88f4-4e41-a177-0771cfd532c4-kube-api-access-8bs9h" (OuterVolumeSpecName: "kube-api-access-8bs9h") pod "e55f335e-88f4-4e41-a177-0771cfd532c4" (UID: "e55f335e-88f4-4e41-a177-0771cfd532c4"). InnerVolumeSpecName "kube-api-access-8bs9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.569727 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e55f335e-88f4-4e41-a177-0771cfd532c4" (UID: "e55f335e-88f4-4e41-a177-0771cfd532c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.579271 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.589627 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-config-data" (OuterVolumeSpecName: "config-data") pod "e55f335e-88f4-4e41-a177-0771cfd532c4" (UID: "e55f335e-88f4-4e41-a177-0771cfd532c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.644175 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-config-data-custom\") pod \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.644238 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsq5s\" (UniqueName: \"kubernetes.io/projected/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-kube-api-access-nsq5s\") pod \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.644263 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-config-data\") pod \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.644329 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-public-tls-certs\") pod \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.644348 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-logs\") pod \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.644376 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-combined-ca-bundle\") pod \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.644400 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-internal-tls-certs\") pod \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\" (UID: \"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.645093 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e55f335e-88f4-4e41-a177-0771cfd532c4-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.645110 5049 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.645123 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bs9h\" (UniqueName: \"kubernetes.io/projected/e55f335e-88f4-4e41-a177-0771cfd532c4-kube-api-access-8bs9h\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.645136 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.645147 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e55f335e-88f4-4e41-a177-0771cfd532c4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.645200 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-logs" (OuterVolumeSpecName: "logs") pod "6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" (UID: "6382e0b5-3cd0-484c-a75e-57f7f6c8fb52"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.647309 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" (UID: "6382e0b5-3cd0-484c-a75e-57f7f6c8fb52"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.647348 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-kube-api-access-nsq5s" (OuterVolumeSpecName: "kube-api-access-nsq5s") pod "6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" (UID: "6382e0b5-3cd0-484c-a75e-57f7f6c8fb52"). InnerVolumeSpecName "kube-api-access-nsq5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.665258 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" (UID: "6382e0b5-3cd0-484c-a75e-57f7f6c8fb52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.679909 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" (UID: "6382e0b5-3cd0-484c-a75e-57f7f6c8fb52"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.681219 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-config-data" (OuterVolumeSpecName: "config-data") pod "6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" (UID: "6382e0b5-3cd0-484c-a75e-57f7f6c8fb52"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.692583 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" (UID: "6382e0b5-3cd0-484c-a75e-57f7f6c8fb52"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.745837 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-combined-ca-bundle\") pod \"7b36a6d6-32ec-4c02-b274-319cb860222c\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.745983 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k28s7\" (UniqueName: \"kubernetes.io/projected/7b36a6d6-32ec-4c02-b274-319cb860222c-kube-api-access-k28s7\") pod \"7b36a6d6-32ec-4c02-b274-319cb860222c\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.746131 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-config-data-custom\") pod \"7b36a6d6-32ec-4c02-b274-319cb860222c\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.747300 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b36a6d6-32ec-4c02-b274-319cb860222c-logs\") pod \"7b36a6d6-32ec-4c02-b274-319cb860222c\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.747455 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-config-data\") pod \"7b36a6d6-32ec-4c02-b274-319cb860222c\" (UID: \"7b36a6d6-32ec-4c02-b274-319cb860222c\") " Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.748827 5049 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.748866 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.748899 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.748915 5049 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.748932 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.748947 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsq5s\" (UniqueName: \"kubernetes.io/projected/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-kube-api-access-nsq5s\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.748964 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.750318 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b36a6d6-32ec-4c02-b274-319cb860222c-logs" (OuterVolumeSpecName: "logs") pod "7b36a6d6-32ec-4c02-b274-319cb860222c" (UID: "7b36a6d6-32ec-4c02-b274-319cb860222c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.752765 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b36a6d6-32ec-4c02-b274-319cb860222c-kube-api-access-k28s7" (OuterVolumeSpecName: "kube-api-access-k28s7") pod "7b36a6d6-32ec-4c02-b274-319cb860222c" (UID: "7b36a6d6-32ec-4c02-b274-319cb860222c"). InnerVolumeSpecName "kube-api-access-k28s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.759214 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7b36a6d6-32ec-4c02-b274-319cb860222c" (UID: "7b36a6d6-32ec-4c02-b274-319cb860222c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.762966 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-c9f58f99c-tq7mf"] Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.770039 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-c9f58f99c-tq7mf"] Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.790103 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b36a6d6-32ec-4c02-b274-319cb860222c" (UID: "7b36a6d6-32ec-4c02-b274-319cb860222c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.807948 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-config-data" (OuterVolumeSpecName: "config-data") pod "7b36a6d6-32ec-4c02-b274-319cb860222c" (UID: "7b36a6d6-32ec-4c02-b274-319cb860222c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.850042 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.850213 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b36a6d6-32ec-4c02-b274-319cb860222c-logs\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.850332 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.850443 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b36a6d6-32ec-4c02-b274-319cb860222c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:52 crc kubenswrapper[5049]: I0127 17:21:52.850636 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k28s7\" (UniqueName: \"kubernetes.io/projected/7b36a6d6-32ec-4c02-b274-319cb860222c-kube-api-access-k28s7\") on node \"crc\" DevicePath \"\"" Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.429496 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" event={"ID":"7b36a6d6-32ec-4c02-b274-319cb860222c","Type":"ContainerDied","Data":"cd9b8f8965f081ff7d30e8ddfa484f660c50c5fdc8238e8e41dc887200b46f2e"} Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.429899 5049 scope.go:117] "RemoveContainer" containerID="5212effe4a5f9b45d0cde4b2a3588776508dc67c73e64ac7ea425d6da44261e9" Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.429548 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-575b565ff8-wcjw4" Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.434495 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85447fcffb-gb5mq" Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.434576 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85447fcffb-gb5mq" event={"ID":"6382e0b5-3cd0-484c-a75e-57f7f6c8fb52","Type":"ContainerDied","Data":"7c03e3771bf1bd786586f54ab25dcca250b4517db17ad28d3eef190ed8cdd2c1"} Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.469609 5049 scope.go:117] "RemoveContainer" containerID="7560c386262e552991ab259cf0a76b6d1070d688c0810ffc7b01a4e88c45247b" Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.487523 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-575b565ff8-wcjw4"] Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.500759 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-575b565ff8-wcjw4"] Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.508019 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-85447fcffb-gb5mq"] Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.515125 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-85447fcffb-gb5mq"] Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.527795 5049 scope.go:117] "RemoveContainer" containerID="778a554e4a163939392827258f2231fc8efa1afba442cb645dcf8ea6914e87df" Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.553049 5049 scope.go:117] "RemoveContainer" containerID="31532940e27611eb351095fe8d11a748df2424f7203a4ece2b697b34fe6f40f7" Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.666147 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" path="/var/lib/kubelet/pods/009eaa47-1d7c-46e6-aeea-b25f77ea35a9/volumes" Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.667983 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" path="/var/lib/kubelet/pods/6382e0b5-3cd0-484c-a75e-57f7f6c8fb52/volumes" Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.669382 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b36a6d6-32ec-4c02-b274-319cb860222c" path="/var/lib/kubelet/pods/7b36a6d6-32ec-4c02-b274-319cb860222c/volumes" Jan 27 17:21:53 crc kubenswrapper[5049]: I0127 17:21:53.671622 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e55f335e-88f4-4e41-a177-0771cfd532c4" path="/var/lib/kubelet/pods/e55f335e-88f4-4e41-a177-0771cfd532c4/volumes" Jan 27 17:21:54 crc kubenswrapper[5049]: I0127 17:21:54.049564 5049 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podf2cc976d-73bd-4d16-a1f6-84108954384f"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podf2cc976d-73bd-4d16-a1f6-84108954384f] : Timed out while waiting for systemd to remove kubepods-besteffort-podf2cc976d_73bd_4d16_a1f6_84108954384f.slice" Jan 27 17:21:54 crc kubenswrapper[5049]: E0127 17:21:54.049641 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podf2cc976d-73bd-4d16-a1f6-84108954384f] : unable to destroy cgroup paths for cgroup [kubepods besteffort podf2cc976d-73bd-4d16-a1f6-84108954384f] : Timed out while waiting for systemd to remove kubepods-besteffort-podf2cc976d_73bd_4d16_a1f6_84108954384f.slice" pod="openstack/root-account-create-update-9swgr" 
podUID="f2cc976d-73bd-4d16-a1f6-84108954384f" Jan 27 17:21:54 crc kubenswrapper[5049]: I0127 17:21:54.063751 5049 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod81cf45aa-76f9-41d4-9385-7796174601b0"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod81cf45aa-76f9-41d4-9385-7796174601b0] : Timed out while waiting for systemd to remove kubepods-besteffort-pod81cf45aa_76f9_41d4_9385_7796174601b0.slice" Jan 27 17:21:54 crc kubenswrapper[5049]: E0127 17:21:54.063810 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod81cf45aa-76f9-41d4-9385-7796174601b0] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod81cf45aa-76f9-41d4-9385-7796174601b0] : Timed out while waiting for systemd to remove kubepods-besteffort-pod81cf45aa_76f9_41d4_9385_7796174601b0.slice" pod="openstack/nova-cell0-80af-account-create-update-f778m" podUID="81cf45aa-76f9-41d4-9385-7796174601b0" Jan 27 17:21:54 crc kubenswrapper[5049]: I0127 17:21:54.460626 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-80af-account-create-update-f778m" Jan 27 17:21:54 crc kubenswrapper[5049]: I0127 17:21:54.460626 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9swgr" Jan 27 17:21:54 crc kubenswrapper[5049]: I0127 17:21:54.494866 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9swgr"] Jan 27 17:21:54 crc kubenswrapper[5049]: I0127 17:21:54.513665 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9swgr"] Jan 27 17:21:54 crc kubenswrapper[5049]: I0127 17:21:54.526347 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-80af-account-create-update-f778m"] Jan 27 17:21:54 crc kubenswrapper[5049]: I0127 17:21:54.530541 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-80af-account-create-update-f778m"] Jan 27 17:21:55 crc kubenswrapper[5049]: I0127 17:21:55.667712 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81cf45aa-76f9-41d4-9385-7796174601b0" path="/var/lib/kubelet/pods/81cf45aa-76f9-41d4-9385-7796174601b0/volumes" Jan 27 17:21:55 crc kubenswrapper[5049]: I0127 17:21:55.668831 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2cc976d-73bd-4d16-a1f6-84108954384f" path="/var/lib/kubelet/pods/f2cc976d-73bd-4d16-a1f6-84108954384f/volumes" Jan 27 17:21:57 crc kubenswrapper[5049]: I0127 17:21:57.018810 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-679b885964-9p8nj" podUID="c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.150:5000/v3\": dial tcp 10.217.0.150:5000: i/o timeout" Jan 27 17:21:57 crc kubenswrapper[5049]: I0127 17:21:57.730040 5049 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","poddbb24b4b-dfbd-431f-8244-098c40f7c24f"] err="unable to destroy cgroup paths for cgroup [kubepods burstable poddbb24b4b-dfbd-431f-8244-098c40f7c24f] : Timed out while waiting for systemd to remove kubepods-burstable-poddbb24b4b_dfbd_431f_8244_098c40f7c24f.slice" Jan 27 17:21:57 crc kubenswrapper[5049]: E0127 17:21:57.730390 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup 
Jan 27 17:21:54 crc kubenswrapper[5049]: I0127 17:21:54.460626 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-80af-account-create-update-f778m"
Jan 27 17:21:54 crc kubenswrapper[5049]: I0127 17:21:54.460626 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9swgr"
Jan 27 17:21:54 crc kubenswrapper[5049]: I0127 17:21:54.494866 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9swgr"]
Jan 27 17:21:54 crc kubenswrapper[5049]: I0127 17:21:54.513665 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9swgr"]
Jan 27 17:21:54 crc kubenswrapper[5049]: I0127 17:21:54.526347 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-80af-account-create-update-f778m"]
Jan 27 17:21:54 crc kubenswrapper[5049]: I0127 17:21:54.530541 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-80af-account-create-update-f778m"]
Jan 27 17:21:55 crc kubenswrapper[5049]: I0127 17:21:55.667712 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81cf45aa-76f9-41d4-9385-7796174601b0" path="/var/lib/kubelet/pods/81cf45aa-76f9-41d4-9385-7796174601b0/volumes"
Jan 27 17:21:55 crc kubenswrapper[5049]: I0127 17:21:55.668831 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2cc976d-73bd-4d16-a1f6-84108954384f" path="/var/lib/kubelet/pods/f2cc976d-73bd-4d16-a1f6-84108954384f/volumes"
Jan 27 17:21:57 crc kubenswrapper[5049]: I0127 17:21:57.018810 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/keystone-679b885964-9p8nj" podUID="c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" containerName="keystone-api" probeResult="failure" output="Get \"https://10.217.0.150:5000/v3\": dial tcp 10.217.0.150:5000: i/o timeout"
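The readiness failure above is a plain HTTPS GET that timed out. A rough way to reproduce the same check from the node, assuming the kubelet-default 1-second probe timeout and, as kubelet does for HTTPS probes, skipping certificate verification:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same endpoint the keystone-api readiness probe timed out against.
	client := &http.Client{
		Timeout: 1 * time.Second, // kubelet's default probe timeout
		Transport: &http.Transport{
			// HTTPS probes are made without verifying the serving cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.217.0.150:5000/v3")
	if err != nil {
		fmt.Println("probe failed:", err) // e.g. i/o timeout, as in the log
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}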
"RemoveContainer" containerID="fb5db9a00e5113f7d25d292ca3886b1f41516fc87005b0634e2553727856eb8a" Jan 27 17:23:12 crc kubenswrapper[5049]: I0127 17:23:12.576391 5049 scope.go:117] "RemoveContainer" containerID="eb7bb6bdc696d16a9391e8b7d03b5fc3378a5cabbebba4c36fb5b1740306e76c" Jan 27 17:23:12 crc kubenswrapper[5049]: I0127 17:23:12.601991 5049 scope.go:117] "RemoveContainer" containerID="286f2a38cacef957dcac53193780afbff30763c14e205700071bc15be49d04a5" Jan 27 17:23:12 crc kubenswrapper[5049]: I0127 17:23:12.620726 5049 scope.go:117] "RemoveContainer" containerID="c81601cbfa3e2090ea7c52671baa125f1040d78ee414966c3e9c6e687d304585" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.994567 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-82hll"] Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.995397 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4b4464-1c98-412b-96cf-235908a4eaf6" containerName="nova-cell0-conductor-conductor" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.995422 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4b4464-1c98-412b-96cf-235908a4eaf6" containerName="nova-cell0-conductor-conductor" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.995437 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492cb82e-33fb-4fc7-85e2-7d4285e5ff00" containerName="cinder-api-log" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.995453 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="492cb82e-33fb-4fc7-85e2-7d4285e5ff00" containerName="cinder-api-log" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.995470 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-replicator" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.995483 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-replicator" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.995501 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-expirer" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.995514 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-expirer" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.995552 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-replicator" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.995569 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-replicator" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.995598 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" containerName="neutron-api" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.995616 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" containerName="neutron-api" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.995644 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de39a65a-7265-4418-a94b-f8f8f30c3807" containerName="mysql-bootstrap" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.995657 5049 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="de39a65a-7265-4418-a94b-f8f8f30c3807" containerName="mysql-bootstrap" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.995710 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-replicator" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.995729 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-replicator" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.995747 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" containerName="nova-metadata-metadata" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.995760 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" containerName="nova-metadata-metadata" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.995786 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b36a6d6-32ec-4c02-b274-319cb860222c" containerName="barbican-keystone-listener-log" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.995799 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b36a6d6-32ec-4c02-b274-319cb860222c" containerName="barbican-keystone-listener-log" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.995828 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051db122-80f6-47fc-8d5c-5244d92e593d" containerName="ovn-northd" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.995845 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="051db122-80f6-47fc-8d5c-5244d92e593d" containerName="ovn-northd" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.995877 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-reaper" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.995893 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-reaper" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.995915 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-auditor" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.995933 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-auditor" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.995991 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55f335e-88f4-4e41-a177-0771cfd532c4" containerName="barbican-worker" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996012 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55f335e-88f4-4e41-a177-0771cfd532c4" containerName="barbican-worker" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996038 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a923a49d-7e17-40a5-975a-9f4a39f92d51" containerName="proxy-server" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996055 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a923a49d-7e17-40a5-975a-9f4a39f92d51" containerName="proxy-server" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996078 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b36a6d6-32ec-4c02-b274-319cb860222c" containerName="barbican-keystone-listener" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 
17:23:13.996096 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b36a6d6-32ec-4c02-b274-319cb860222c" containerName="barbican-keystone-listener" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996117 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-server" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996137 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-server" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996170 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55f335e-88f4-4e41-a177-0771cfd532c4" containerName="barbican-worker-log" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996186 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55f335e-88f4-4e41-a177-0771cfd532c4" containerName="barbican-worker-log" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996215 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89c9402-b4c3-4180-8a61-9e63497ebb66" containerName="glance-httpd" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996232 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89c9402-b4c3-4180-8a61-9e63497ebb66" containerName="glance-httpd" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996254 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d76a4d6-b3a5-4931-9fb1-13531143ebaa" containerName="probe" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996270 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d76a4d6-b3a5-4931-9fb1-13531143ebaa" containerName="probe" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996294 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="rsync" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996310 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="rsync" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996334 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85620b2d-c74a-4c51-8129-c747016dc357" containerName="placement-log" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996351 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="85620b2d-c74a-4c51-8129-c747016dc357" containerName="placement-log" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996370 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85620b2d-c74a-4c51-8129-c747016dc357" containerName="placement-api" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996387 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="85620b2d-c74a-4c51-8129-c747016dc357" containerName="placement-api" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996422 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de39a65a-7265-4418-a94b-f8f8f30c3807" containerName="galera" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996440 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="de39a65a-7265-4418-a94b-f8f8f30c3807" containerName="galera" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996461 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovs-vswitchd" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996480 5049 
state_mem.go:107] "Deleted CPUSet assignment" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovs-vswitchd" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996499 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovsdb-server-init" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996521 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovsdb-server-init" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996548 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="proxy-httpd" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996565 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="proxy-httpd" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996591 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="sg-core" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996607 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="sg-core" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996636 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ffcfe9-3e93-48ee-8d03-9b653d1bfede" containerName="setup-container" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996654 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ffcfe9-3e93-48ee-8d03-9b653d1bfede" containerName="setup-container" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996719 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492cb82e-33fb-4fc7-85e2-7d4285e5ff00" containerName="cinder-api" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996739 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="492cb82e-33fb-4fc7-85e2-7d4285e5ff00" containerName="cinder-api" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996757 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ad8919-34a1-4d3c-8f82-a8902bc857ff" containerName="barbican-worker-log" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996773 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ad8919-34a1-4d3c-8f82-a8902bc857ff" containerName="barbican-worker-log" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996793 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfa2378-a75a-41b5-9ea9-71c8da89f750" containerName="barbican-keystone-listener-log" Jan 27 17:23:13 crc kubenswrapper[5049]: I0127 17:23:13.996816 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfa2378-a75a-41b5-9ea9-71c8da89f750" containerName="barbican-keystone-listener-log" Jan 27 17:23:13 crc kubenswrapper[5049]: E0127 17:23:13.996844 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd8752fa-c3a1-4eba-91dc-6af200eb8168" containerName="barbican-api-log" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.996863 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd8752fa-c3a1-4eba-91dc-6af200eb8168" containerName="barbican-api-log" Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.996889 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="swift-recon-cron" Jan 27 17:23:14 crc kubenswrapper[5049]: 
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.996907 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="swift-recon-cron"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.996927 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb24b4b-dfbd-431f-8244-098c40f7c24f" containerName="rabbitmq"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.996944 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb24b4b-dfbd-431f-8244-098c40f7c24f" containerName="rabbitmq"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.996972 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89c9402-b4c3-4180-8a61-9e63497ebb66" containerName="glance-log"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.996991 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89c9402-b4c3-4180-8a61-9e63497ebb66" containerName="glance-log"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997013 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294e84c0-d49f-4e45-87d5-085c7accf51e" containerName="nova-api-log"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997031 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="294e84c0-d49f-4e45-87d5-085c7accf51e" containerName="nova-api-log"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997059 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28327cb6-87a9-4b24-b8fb-f43c33076b1b" containerName="memcached"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997076 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="28327cb6-87a9-4b24-b8fb-f43c33076b1b" containerName="memcached"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997102 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb24b4b-dfbd-431f-8244-098c40f7c24f" containerName="setup-container"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997122 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb24b4b-dfbd-431f-8244-098c40f7c24f" containerName="setup-container"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997154 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-auditor"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997171 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-auditor"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997203 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" containerName="keystone-api"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997222 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" containerName="keystone-api"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997246 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294e84c0-d49f-4e45-87d5-085c7accf51e" containerName="nova-api-api"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997265 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="294e84c0-d49f-4e45-87d5-085c7accf51e" containerName="nova-api-api"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997300 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" containerName="neutron-httpd"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997319 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" containerName="neutron-httpd"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997341 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="ceilometer-central-agent"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997359 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="ceilometer-central-agent"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997388 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9edd1d0-64dc-4c83-9149-04c772e4e517" containerName="nova-scheduler-scheduler"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997406 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9edd1d0-64dc-4c83-9149-04c772e4e517" containerName="nova-scheduler-scheduler"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997429 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997447 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997470 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ffcfe9-3e93-48ee-8d03-9b653d1bfede" containerName="rabbitmq"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997486 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ffcfe9-3e93-48ee-8d03-9b653d1bfede" containerName="rabbitmq"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997511 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d76a4d6-b3a5-4931-9fb1-13531143ebaa" containerName="cinder-scheduler"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997527 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d76a4d6-b3a5-4931-9fb1-13531143ebaa" containerName="cinder-scheduler"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997540 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a923a49d-7e17-40a5-975a-9f4a39f92d51" containerName="proxy-httpd"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997553 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a923a49d-7e17-40a5-975a-9f4a39f92d51" containerName="proxy-httpd"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997579 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api-log"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997596 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api-log"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997625 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-auditor"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997642 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-auditor"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997668 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" containerName="nova-metadata-log"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997723 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" containerName="nova-metadata-log"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997751 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-updater"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997766 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-updater"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997786 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ad8919-34a1-4d3c-8f82-a8902bc857ff" containerName="barbican-worker"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997799 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ad8919-34a1-4d3c-8f82-a8902bc857ff" containerName="barbican-worker"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997823 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22dc9694-6c5e-4ac3-99e3-910dac92573a" containerName="nova-cell1-conductor-conductor"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997836 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="22dc9694-6c5e-4ac3-99e3-910dac92573a" containerName="nova-cell1-conductor-conductor"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997853 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-updater"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997866 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-updater"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997881 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovsdb-server"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997893 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovsdb-server"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997911 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b915091f-1f89-4602-8b1f-2214883644e0" containerName="kube-state-metrics"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997924 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b915091f-1f89-4602-8b1f-2214883644e0" containerName="kube-state-metrics"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997948 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd8752fa-c3a1-4eba-91dc-6af200eb8168" containerName="barbican-api"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997960 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd8752fa-c3a1-4eba-91dc-6af200eb8168" containerName="barbican-api"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.997976 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59384c20-c0a3-4524-9ddb-407b96e8f882" containerName="glance-httpd"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.997989 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="59384c20-c0a3-4524-9ddb-407b96e8f882" containerName="glance-httpd"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.998009 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-server"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998022 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-server"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.998038 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfa2378-a75a-41b5-9ea9-71c8da89f750" containerName="barbican-keystone-listener"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998051 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfa2378-a75a-41b5-9ea9-71c8da89f750" containerName="barbican-keystone-listener"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.998073 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051db122-80f6-47fc-8d5c-5244d92e593d" containerName="openstack-network-exporter"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998086 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="051db122-80f6-47fc-8d5c-5244d92e593d" containerName="openstack-network-exporter"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.998110 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59384c20-c0a3-4524-9ddb-407b96e8f882" containerName="glance-log"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998123 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="59384c20-c0a3-4524-9ddb-407b96e8f882" containerName="glance-log"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.998139 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="389cf061-3e03-4e54-bf97-c88a747fd18b" containerName="ovn-controller"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998152 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="389cf061-3e03-4e54-bf97-c88a747fd18b" containerName="ovn-controller"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.998173 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="ceilometer-notification-agent"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998186 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="ceilometer-notification-agent"
Jan 27 17:23:14 crc kubenswrapper[5049]: E0127 17:23:13.998201 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-server"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998213 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-server"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998478 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="389cf061-3e03-4e54-bf97-c88a747fd18b" containerName="ovn-controller"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998501 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-auditor"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998518 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd8752fa-c3a1-4eba-91dc-6af200eb8168" containerName="barbican-api"
Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998539 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4b4464-1c98-412b-96cf-235908a4eaf6" containerName="nova-cell0-conductor-conductor"
memory_manager.go:354] "RemoveStaleState removing state" podUID="294e84c0-d49f-4e45-87d5-085c7accf51e" containerName="nova-api-api" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998723 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-replicator" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998744 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" containerName="nova-metadata-metadata" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998759 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="swift-recon-cron" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998779 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-expirer" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998803 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovsdb-server" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998823 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b36a6d6-32ec-4c02-b274-319cb860222c" containerName="barbican-keystone-listener" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998840 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="ceilometer-notification-agent" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998855 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-auditor" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998871 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a923a49d-7e17-40a5-975a-9f4a39f92d51" containerName="proxy-httpd" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998892 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="051db122-80f6-47fc-8d5c-5244d92e593d" containerName="openstack-network-exporter" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998910 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" containerName="neutron-api" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998930 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89c9402-b4c3-4180-8a61-9e63497ebb66" containerName="glance-log" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998949 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="492cb82e-33fb-4fc7-85e2-7d4285e5ff00" containerName="cinder-api-log" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998963 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="85620b2d-c74a-4c51-8129-c747016dc357" containerName="placement-log" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.998984 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="85620b2d-c74a-4c51-8129-c747016dc357" containerName="placement-api" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999002 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e0b118e-d036-4ae2-ac85-5ab90eeea2f5" containerName="neutron-httpd" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999017 5049 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fd8752fa-c3a1-4eba-91dc-6af200eb8168" containerName="barbican-api-log" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999038 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="sg-core" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999057 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3e00689-0036-4c1b-84ee-d4f97cfe2d3e" containerName="keystone-api" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999077 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-replicator" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999095 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="59384c20-c0a3-4524-9ddb-407b96e8f882" containerName="glance-httpd" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999142 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="009eaa47-1d7c-46e6-aeea-b25f77ea35a9" containerName="ovs-vswitchd" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999174 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbb24b4b-dfbd-431f-8244-098c40f7c24f" containerName="rabbitmq" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999196 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89c9402-b4c3-4180-8a61-9e63497ebb66" containerName="glance-httpd" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999227 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9edd1d0-64dc-4c83-9149-04c772e4e517" containerName="nova-scheduler-scheduler" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999252 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="294e84c0-d49f-4e45-87d5-085c7accf51e" containerName="nova-api-log" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999279 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b36a6d6-32ec-4c02-b274-319cb860222c" containerName="barbican-keystone-listener-log" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999310 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e55f335e-88f4-4e41-a177-0771cfd532c4" containerName="barbican-worker" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999346 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a923a49d-7e17-40a5-975a-9f4a39f92d51" containerName="proxy-server" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999377 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="proxy-httpd" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999397 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="051db122-80f6-47fc-8d5c-5244d92e593d" containerName="ovn-northd" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999421 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="adfa2378-a75a-41b5-9ea9-71c8da89f750" containerName="barbican-keystone-listener" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999443 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-updater" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999468 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ad8919-34a1-4d3c-8f82-a8902bc857ff" containerName="barbican-worker" Jan 27 17:23:14 crc 
kubenswrapper[5049]: I0127 17:23:13.999493 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e55f335e-88f4-4e41-a177-0771cfd532c4" containerName="barbican-worker-log" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999515 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="adfa2378-a75a-41b5-9ea9-71c8da89f750" containerName="barbican-keystone-listener-log" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999532 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="492cb82e-33fb-4fc7-85e2-7d4285e5ff00" containerName="cinder-api" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999552 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="de39a65a-7265-4418-a94b-f8f8f30c3807" containerName="galera" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:13.999570 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-server" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000175 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="59384c20-c0a3-4524-9ddb-407b96e8f882" containerName="glance-log" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000191 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b92aaa-ae4b-41ba-bd72-5e6d01518000" containerName="ceilometer-central-agent" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000210 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="65eb2d0b-ab1a-4a97-afdc-73592ac6cb29" containerName="nova-metadata-log" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000235 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ad8919-34a1-4d3c-8f82-a8902bc857ff" containerName="barbican-worker-log" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000256 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-auditor" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000275 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ffcfe9-3e93-48ee-8d03-9b653d1bfede" containerName="rabbitmq" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000292 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d76a4d6-b3a5-4931-9fb1-13531143ebaa" containerName="cinder-scheduler" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000314 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="22dc9694-6c5e-4ac3-99e3-910dac92573a" containerName="nova-cell1-conductor-conductor" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000330 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-server" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000345 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d76a4d6-b3a5-4931-9fb1-13531143ebaa" containerName="probe" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000360 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api-log" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000376 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b915091f-1f89-4602-8b1f-2214883644e0" containerName="kube-state-metrics" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000396 5049 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="28327cb6-87a9-4b24-b8fb-f43c33076b1b" containerName="memcached" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000418 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="rsync" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000438 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="account-reaper" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000454 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="container-server" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000474 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-updater" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000495 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af4a67e-8714-4d41-ab32-7b2e526a0799" containerName="object-replicator" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.000511 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="6382e0b5-3cd0-484c-a75e-57f7f6c8fb52" containerName="barbican-api" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.003067 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.023213 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-82hll"] Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.107666 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67172a8b-7dd6-45cc-95db-dfbdb10d8071-utilities\") pod \"redhat-marketplace-82hll\" (UID: \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\") " pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.107766 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67172a8b-7dd6-45cc-95db-dfbdb10d8071-catalog-content\") pod \"redhat-marketplace-82hll\" (UID: \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\") " pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.107798 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8cpf\" (UniqueName: \"kubernetes.io/projected/67172a8b-7dd6-45cc-95db-dfbdb10d8071-kube-api-access-n8cpf\") pod \"redhat-marketplace-82hll\" (UID: \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\") " pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.210265 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67172a8b-7dd6-45cc-95db-dfbdb10d8071-utilities\") pod \"redhat-marketplace-82hll\" (UID: \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\") " pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.210373 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/67172a8b-7dd6-45cc-95db-dfbdb10d8071-catalog-content\") pod \"redhat-marketplace-82hll\" (UID: \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\") " pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.210422 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8cpf\" (UniqueName: \"kubernetes.io/projected/67172a8b-7dd6-45cc-95db-dfbdb10d8071-kube-api-access-n8cpf\") pod \"redhat-marketplace-82hll\" (UID: \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\") " pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.210873 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67172a8b-7dd6-45cc-95db-dfbdb10d8071-utilities\") pod \"redhat-marketplace-82hll\" (UID: \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\") " pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.210996 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67172a8b-7dd6-45cc-95db-dfbdb10d8071-catalog-content\") pod \"redhat-marketplace-82hll\" (UID: \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\") " pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.233502 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8cpf\" (UniqueName: \"kubernetes.io/projected/67172a8b-7dd6-45cc-95db-dfbdb10d8071-kube-api-access-n8cpf\") pod \"redhat-marketplace-82hll\" (UID: \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\") " pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.343164 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:14 crc kubenswrapper[5049]: I0127 17:23:14.846348 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-82hll"] Jan 27 17:23:15 crc kubenswrapper[5049]: I0127 17:23:15.365460 5049 generic.go:334] "Generic (PLEG): container finished" podID="67172a8b-7dd6-45cc-95db-dfbdb10d8071" containerID="4457ab43a04d6c8d81739b67f0e693c2831ee1e53e701c67a6c2db07beaae3c4" exitCode=0 Jan 27 17:23:15 crc kubenswrapper[5049]: I0127 17:23:15.365540 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82hll" event={"ID":"67172a8b-7dd6-45cc-95db-dfbdb10d8071","Type":"ContainerDied","Data":"4457ab43a04d6c8d81739b67f0e693c2831ee1e53e701c67a6c2db07beaae3c4"} Jan 27 17:23:15 crc kubenswrapper[5049]: I0127 17:23:15.365892 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82hll" event={"ID":"67172a8b-7dd6-45cc-95db-dfbdb10d8071","Type":"ContainerStarted","Data":"48f27573fe7e1db1fa85aa4afe7aebf5c77865fb1546174767e5bf6e444f8537"} Jan 27 17:23:15 crc kubenswrapper[5049]: I0127 17:23:15.368477 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 17:23:15 crc kubenswrapper[5049]: I0127 17:23:15.967113 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tk4h4"] Jan 27 17:23:15 crc kubenswrapper[5049]: I0127 17:23:15.974354 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:15 crc kubenswrapper[5049]: I0127 17:23:15.976451 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tk4h4"] Jan 27 17:23:16 crc kubenswrapper[5049]: I0127 17:23:16.047791 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-utilities\") pod \"certified-operators-tk4h4\" (UID: \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\") " pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:16 crc kubenswrapper[5049]: I0127 17:23:16.047882 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz4p9\" (UniqueName: \"kubernetes.io/projected/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-kube-api-access-bz4p9\") pod \"certified-operators-tk4h4\" (UID: \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\") " pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:16 crc kubenswrapper[5049]: I0127 17:23:16.047948 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-catalog-content\") pod \"certified-operators-tk4h4\" (UID: \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\") " pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:16 crc kubenswrapper[5049]: I0127 17:23:16.148737 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-utilities\") pod \"certified-operators-tk4h4\" (UID: \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\") " pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:16 crc kubenswrapper[5049]: I0127 17:23:16.148788 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz4p9\" (UniqueName: \"kubernetes.io/projected/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-kube-api-access-bz4p9\") pod \"certified-operators-tk4h4\" (UID: \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\") " pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:16 crc kubenswrapper[5049]: I0127 17:23:16.148813 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-catalog-content\") pod \"certified-operators-tk4h4\" (UID: \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\") " pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:16 crc kubenswrapper[5049]: I0127 17:23:16.149188 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-utilities\") pod \"certified-operators-tk4h4\" (UID: \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\") " pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:16 crc kubenswrapper[5049]: I0127 17:23:16.149254 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-catalog-content\") pod \"certified-operators-tk4h4\" (UID: \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\") " pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:16 crc kubenswrapper[5049]: I0127 17:23:16.178810 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bz4p9\" (UniqueName: \"kubernetes.io/projected/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-kube-api-access-bz4p9\") pod \"certified-operators-tk4h4\" (UID: \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\") " pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:16 crc kubenswrapper[5049]: I0127 17:23:16.339572 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:16 crc kubenswrapper[5049]: I0127 17:23:16.376259 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82hll" event={"ID":"67172a8b-7dd6-45cc-95db-dfbdb10d8071","Type":"ContainerStarted","Data":"31a9e4af21e1a0a80800076ce1b08b424a32bc2eff20db09f312b41ff6f3427b"} Jan 27 17:23:16 crc kubenswrapper[5049]: W0127 17:23:16.812986 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5fba9f2_8ed7_469d_b6eb_0487061f2e85.slice/crio-75ebb3c37ba44fc6a308a9de7b35106a77e19501116d0bea9c756f19eae6e63b WatchSource:0}: Error finding container 75ebb3c37ba44fc6a308a9de7b35106a77e19501116d0bea9c756f19eae6e63b: Status 404 returned error can't find the container with id 75ebb3c37ba44fc6a308a9de7b35106a77e19501116d0bea9c756f19eae6e63b Jan 27 17:23:16 crc kubenswrapper[5049]: I0127 17:23:16.816419 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tk4h4"] Jan 27 17:23:17 crc kubenswrapper[5049]: I0127 17:23:17.392111 5049 generic.go:334] "Generic (PLEG): container finished" podID="67172a8b-7dd6-45cc-95db-dfbdb10d8071" containerID="31a9e4af21e1a0a80800076ce1b08b424a32bc2eff20db09f312b41ff6f3427b" exitCode=0 Jan 27 17:23:17 crc kubenswrapper[5049]: I0127 17:23:17.392160 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82hll" event={"ID":"67172a8b-7dd6-45cc-95db-dfbdb10d8071","Type":"ContainerDied","Data":"31a9e4af21e1a0a80800076ce1b08b424a32bc2eff20db09f312b41ff6f3427b"} Jan 27 17:23:17 crc kubenswrapper[5049]: I0127 17:23:17.394767 5049 generic.go:334] "Generic (PLEG): container finished" podID="c5fba9f2-8ed7-469d-b6eb-0487061f2e85" containerID="f06a36adb28b07e802b123266eea7c9259332035b792300797e4b9f2552119c3" exitCode=0 Jan 27 17:23:17 crc kubenswrapper[5049]: I0127 17:23:17.394806 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk4h4" event={"ID":"c5fba9f2-8ed7-469d-b6eb-0487061f2e85","Type":"ContainerDied","Data":"f06a36adb28b07e802b123266eea7c9259332035b792300797e4b9f2552119c3"} Jan 27 17:23:17 crc kubenswrapper[5049]: I0127 17:23:17.394839 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk4h4" event={"ID":"c5fba9f2-8ed7-469d-b6eb-0487061f2e85","Type":"ContainerStarted","Data":"75ebb3c37ba44fc6a308a9de7b35106a77e19501116d0bea9c756f19eae6e63b"} Jan 27 17:23:18 crc kubenswrapper[5049]: I0127 17:23:18.409776 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82hll" event={"ID":"67172a8b-7dd6-45cc-95db-dfbdb10d8071","Type":"ContainerStarted","Data":"a102f0729e0361767dfb557d37b996b85843e4c02366717d51d1c46821579e39"} Jan 27 17:23:18 crc kubenswrapper[5049]: I0127 17:23:18.434753 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-82hll" podStartSLOduration=3.004916552 
podStartE2EDuration="5.434738217s" podCreationTimestamp="2026-01-27 17:23:13 +0000 UTC" firstStartedPulling="2026-01-27 17:23:15.368058789 +0000 UTC m=+1570.467032378" lastFinishedPulling="2026-01-27 17:23:17.797880484 +0000 UTC m=+1572.896854043" observedRunningTime="2026-01-27 17:23:18.429301391 +0000 UTC m=+1573.528274940" watchObservedRunningTime="2026-01-27 17:23:18.434738217 +0000 UTC m=+1573.533711766" Jan 27 17:23:19 crc kubenswrapper[5049]: I0127 17:23:19.420904 5049 generic.go:334] "Generic (PLEG): container finished" podID="c5fba9f2-8ed7-469d-b6eb-0487061f2e85" containerID="e54cd3a69e38e170358f9ea10c2bdb279434f4df1a0feea17d8c8a2dcc4af5a6" exitCode=0 Jan 27 17:23:19 crc kubenswrapper[5049]: I0127 17:23:19.421027 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk4h4" event={"ID":"c5fba9f2-8ed7-469d-b6eb-0487061f2e85","Type":"ContainerDied","Data":"e54cd3a69e38e170358f9ea10c2bdb279434f4df1a0feea17d8c8a2dcc4af5a6"} Jan 27 17:23:20 crc kubenswrapper[5049]: I0127 17:23:20.438401 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk4h4" event={"ID":"c5fba9f2-8ed7-469d-b6eb-0487061f2e85","Type":"ContainerStarted","Data":"628493d55a6afd490b29013f3ae3f3adb1830fea230f7fe2853d5c69560628eb"} Jan 27 17:23:20 crc kubenswrapper[5049]: I0127 17:23:20.460942 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tk4h4" podStartSLOduration=3.043016486 podStartE2EDuration="5.460921517s" podCreationTimestamp="2026-01-27 17:23:15 +0000 UTC" firstStartedPulling="2026-01-27 17:23:17.398035999 +0000 UTC m=+1572.497009588" lastFinishedPulling="2026-01-27 17:23:19.81594107 +0000 UTC m=+1574.914914619" observedRunningTime="2026-01-27 17:23:20.455012567 +0000 UTC m=+1575.553986116" watchObservedRunningTime="2026-01-27 17:23:20.460921517 +0000 UTC m=+1575.559895066"
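
The two "Observed pod startup duration" records above are internally consistent, matching the SLI convention that podStartSLOduration excludes image-pull time: subtract the pull interval, taken on the monotonic clock (the m=+... offsets), from podStartE2EDuration. Worked out in LaTeX for certified-operators-tk4h4:

\[ t_{\mathrm{pull}} = 1574.914914619 - 1572.497009588 = 2.417905031\ \mathrm{s} \]
\[ \mathrm{podStartSLOduration} = 5.460921517 - 2.417905031 = 3.043016486\ \mathrm{s} \]

The same subtraction reproduces podStartSLOduration=3.004916552 for redhat-marketplace-82hll: 5.434738217 - (1572.896854043 - 1570.467032378).
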
Jan 27 17:23:24 crc kubenswrapper[5049]: I0127 17:23:24.344199 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:24 crc kubenswrapper[5049]: I0127 17:23:24.344603 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:24 crc kubenswrapper[5049]: I0127 17:23:24.415614 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:24 crc kubenswrapper[5049]: I0127 17:23:24.532447 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:24 crc kubenswrapper[5049]: I0127 17:23:24.668113 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-82hll"] Jan 27 17:23:26 crc kubenswrapper[5049]: I0127 17:23:26.340404 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:26 crc kubenswrapper[5049]: I0127 17:23:26.340956 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:26 crc kubenswrapper[5049]: I0127 17:23:26.416029 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:26 crc kubenswrapper[5049]: I0127 17:23:26.494757 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-82hll" podUID="67172a8b-7dd6-45cc-95db-dfbdb10d8071" containerName="registry-server" containerID="cri-o://a102f0729e0361767dfb557d37b996b85843e4c02366717d51d1c46821579e39" gracePeriod=2 Jan 27 17:23:26 crc kubenswrapper[5049]: I0127 17:23:26.564574 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:26 crc kubenswrapper[5049]: I0127 17:23:26.990967 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.068363 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tk4h4"] Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.124614 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67172a8b-7dd6-45cc-95db-dfbdb10d8071-catalog-content\") pod \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\" (UID: \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\") " Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.124676 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8cpf\" (UniqueName: \"kubernetes.io/projected/67172a8b-7dd6-45cc-95db-dfbdb10d8071-kube-api-access-n8cpf\") pod \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\" (UID: \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\") " Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.124709 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67172a8b-7dd6-45cc-95db-dfbdb10d8071-utilities\") pod \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\" (UID: \"67172a8b-7dd6-45cc-95db-dfbdb10d8071\") " Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.127869 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67172a8b-7dd6-45cc-95db-dfbdb10d8071-utilities" (OuterVolumeSpecName: "utilities") pod "67172a8b-7dd6-45cc-95db-dfbdb10d8071" (UID: "67172a8b-7dd6-45cc-95db-dfbdb10d8071"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.131927 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67172a8b-7dd6-45cc-95db-dfbdb10d8071-kube-api-access-n8cpf" (OuterVolumeSpecName: "kube-api-access-n8cpf") pod "67172a8b-7dd6-45cc-95db-dfbdb10d8071" (UID: "67172a8b-7dd6-45cc-95db-dfbdb10d8071"). InnerVolumeSpecName "kube-api-access-n8cpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.149336 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67172a8b-7dd6-45cc-95db-dfbdb10d8071-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67172a8b-7dd6-45cc-95db-dfbdb10d8071" (UID: "67172a8b-7dd6-45cc-95db-dfbdb10d8071"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.225723 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67172a8b-7dd6-45cc-95db-dfbdb10d8071-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.225755 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8cpf\" (UniqueName: \"kubernetes.io/projected/67172a8b-7dd6-45cc-95db-dfbdb10d8071-kube-api-access-n8cpf\") on node \"crc\" DevicePath \"\"" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.225765 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67172a8b-7dd6-45cc-95db-dfbdb10d8071-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.507705 5049 generic.go:334] "Generic (PLEG): container finished" podID="67172a8b-7dd6-45cc-95db-dfbdb10d8071" containerID="a102f0729e0361767dfb557d37b996b85843e4c02366717d51d1c46821579e39" exitCode=0 Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.507802 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82hll" event={"ID":"67172a8b-7dd6-45cc-95db-dfbdb10d8071","Type":"ContainerDied","Data":"a102f0729e0361767dfb557d37b996b85843e4c02366717d51d1c46821579e39"} Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.507884 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82hll" event={"ID":"67172a8b-7dd6-45cc-95db-dfbdb10d8071","Type":"ContainerDied","Data":"48f27573fe7e1db1fa85aa4afe7aebf5c77865fb1546174767e5bf6e444f8537"} Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.507917 5049 scope.go:117] "RemoveContainer" containerID="a102f0729e0361767dfb557d37b996b85843e4c02366717d51d1c46821579e39" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.508876 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82hll" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.552135 5049 scope.go:117] "RemoveContainer" containerID="31a9e4af21e1a0a80800076ce1b08b424a32bc2eff20db09f312b41ff6f3427b" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.557879 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-82hll"] Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.566161 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-82hll"] Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.596558 5049 scope.go:117] "RemoveContainer" containerID="4457ab43a04d6c8d81739b67f0e693c2831ee1e53e701c67a6c2db07beaae3c4" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.624584 5049 scope.go:117] "RemoveContainer" containerID="a102f0729e0361767dfb557d37b996b85843e4c02366717d51d1c46821579e39" Jan 27 17:23:27 crc kubenswrapper[5049]: E0127 17:23:27.625866 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a102f0729e0361767dfb557d37b996b85843e4c02366717d51d1c46821579e39\": container with ID starting with a102f0729e0361767dfb557d37b996b85843e4c02366717d51d1c46821579e39 not found: ID does not exist" containerID="a102f0729e0361767dfb557d37b996b85843e4c02366717d51d1c46821579e39" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.626018 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a102f0729e0361767dfb557d37b996b85843e4c02366717d51d1c46821579e39"} err="failed to get container status \"a102f0729e0361767dfb557d37b996b85843e4c02366717d51d1c46821579e39\": rpc error: code = NotFound desc = could not find container \"a102f0729e0361767dfb557d37b996b85843e4c02366717d51d1c46821579e39\": container with ID starting with a102f0729e0361767dfb557d37b996b85843e4c02366717d51d1c46821579e39 not found: ID does not exist" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.626130 5049 scope.go:117] "RemoveContainer" containerID="31a9e4af21e1a0a80800076ce1b08b424a32bc2eff20db09f312b41ff6f3427b" Jan 27 17:23:27 crc kubenswrapper[5049]: E0127 17:23:27.626661 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31a9e4af21e1a0a80800076ce1b08b424a32bc2eff20db09f312b41ff6f3427b\": container with ID starting with 31a9e4af21e1a0a80800076ce1b08b424a32bc2eff20db09f312b41ff6f3427b not found: ID does not exist" containerID="31a9e4af21e1a0a80800076ce1b08b424a32bc2eff20db09f312b41ff6f3427b" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.626756 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31a9e4af21e1a0a80800076ce1b08b424a32bc2eff20db09f312b41ff6f3427b"} err="failed to get container status \"31a9e4af21e1a0a80800076ce1b08b424a32bc2eff20db09f312b41ff6f3427b\": rpc error: code = NotFound desc = could not find container \"31a9e4af21e1a0a80800076ce1b08b424a32bc2eff20db09f312b41ff6f3427b\": container with ID starting with 31a9e4af21e1a0a80800076ce1b08b424a32bc2eff20db09f312b41ff6f3427b not found: ID does not exist" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.626794 5049 scope.go:117] "RemoveContainer" containerID="4457ab43a04d6c8d81739b67f0e693c2831ee1e53e701c67a6c2db07beaae3c4" Jan 27 17:23:27 crc kubenswrapper[5049]: E0127 17:23:27.627541 5049 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4457ab43a04d6c8d81739b67f0e693c2831ee1e53e701c67a6c2db07beaae3c4\": container with ID starting with 4457ab43a04d6c8d81739b67f0e693c2831ee1e53e701c67a6c2db07beaae3c4 not found: ID does not exist" containerID="4457ab43a04d6c8d81739b67f0e693c2831ee1e53e701c67a6c2db07beaae3c4" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.627648 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4457ab43a04d6c8d81739b67f0e693c2831ee1e53e701c67a6c2db07beaae3c4"} err="failed to get container status \"4457ab43a04d6c8d81739b67f0e693c2831ee1e53e701c67a6c2db07beaae3c4\": rpc error: code = NotFound desc = could not find container \"4457ab43a04d6c8d81739b67f0e693c2831ee1e53e701c67a6c2db07beaae3c4\": container with ID starting with 4457ab43a04d6c8d81739b67f0e693c2831ee1e53e701c67a6c2db07beaae3c4 not found: ID does not exist" Jan 27 17:23:27 crc kubenswrapper[5049]: I0127 17:23:27.659045 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67172a8b-7dd6-45cc-95db-dfbdb10d8071" path="/var/lib/kubelet/pods/67172a8b-7dd6-45cc-95db-dfbdb10d8071/volumes" Jan 27 17:23:28 crc kubenswrapper[5049]: I0127 17:23:28.519268 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tk4h4" podUID="c5fba9f2-8ed7-469d-b6eb-0487061f2e85" containerName="registry-server" containerID="cri-o://628493d55a6afd490b29013f3ae3f3adb1830fea230f7fe2853d5c69560628eb" gracePeriod=2 Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.009292 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.155160 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-catalog-content\") pod \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\" (UID: \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\") " Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.155307 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-utilities\") pod \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\" (UID: \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\") " Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.155376 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz4p9\" (UniqueName: \"kubernetes.io/projected/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-kube-api-access-bz4p9\") pod \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\" (UID: \"c5fba9f2-8ed7-469d-b6eb-0487061f2e85\") " Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.156318 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-utilities" (OuterVolumeSpecName: "utilities") pod "c5fba9f2-8ed7-469d-b6eb-0487061f2e85" (UID: "c5fba9f2-8ed7-469d-b6eb-0487061f2e85"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.160315 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-kube-api-access-bz4p9" (OuterVolumeSpecName: "kube-api-access-bz4p9") pod "c5fba9f2-8ed7-469d-b6eb-0487061f2e85" (UID: "c5fba9f2-8ed7-469d-b6eb-0487061f2e85"). InnerVolumeSpecName "kube-api-access-bz4p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.213307 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5fba9f2-8ed7-469d-b6eb-0487061f2e85" (UID: "c5fba9f2-8ed7-469d-b6eb-0487061f2e85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.256929 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.256984 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz4p9\" (UniqueName: \"kubernetes.io/projected/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-kube-api-access-bz4p9\") on node \"crc\" DevicePath \"\"" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.257007 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5fba9f2-8ed7-469d-b6eb-0487061f2e85-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.533923 5049 generic.go:334] "Generic (PLEG): container finished" podID="c5fba9f2-8ed7-469d-b6eb-0487061f2e85" containerID="628493d55a6afd490b29013f3ae3f3adb1830fea230f7fe2853d5c69560628eb" exitCode=0 Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.533993 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk4h4" event={"ID":"c5fba9f2-8ed7-469d-b6eb-0487061f2e85","Type":"ContainerDied","Data":"628493d55a6afd490b29013f3ae3f3adb1830fea230f7fe2853d5c69560628eb"} Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.534015 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tk4h4" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.534070 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk4h4" event={"ID":"c5fba9f2-8ed7-469d-b6eb-0487061f2e85","Type":"ContainerDied","Data":"75ebb3c37ba44fc6a308a9de7b35106a77e19501116d0bea9c756f19eae6e63b"} Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.534107 5049 scope.go:117] "RemoveContainer" containerID="628493d55a6afd490b29013f3ae3f3adb1830fea230f7fe2853d5c69560628eb" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.573358 5049 scope.go:117] "RemoveContainer" containerID="e54cd3a69e38e170358f9ea10c2bdb279434f4df1a0feea17d8c8a2dcc4af5a6" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.598941 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tk4h4"] Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.608835 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tk4h4"] Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.613067 5049 scope.go:117] "RemoveContainer" containerID="f06a36adb28b07e802b123266eea7c9259332035b792300797e4b9f2552119c3" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.652731 5049 scope.go:117] "RemoveContainer" containerID="628493d55a6afd490b29013f3ae3f3adb1830fea230f7fe2853d5c69560628eb" Jan 27 17:23:29 crc kubenswrapper[5049]: E0127 17:23:29.654890 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"628493d55a6afd490b29013f3ae3f3adb1830fea230f7fe2853d5c69560628eb\": container with ID starting with 628493d55a6afd490b29013f3ae3f3adb1830fea230f7fe2853d5c69560628eb not found: ID does not exist" containerID="628493d55a6afd490b29013f3ae3f3adb1830fea230f7fe2853d5c69560628eb" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.654936 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628493d55a6afd490b29013f3ae3f3adb1830fea230f7fe2853d5c69560628eb"} err="failed to get container status \"628493d55a6afd490b29013f3ae3f3adb1830fea230f7fe2853d5c69560628eb\": rpc error: code = NotFound desc = could not find container \"628493d55a6afd490b29013f3ae3f3adb1830fea230f7fe2853d5c69560628eb\": container with ID starting with 628493d55a6afd490b29013f3ae3f3adb1830fea230f7fe2853d5c69560628eb not found: ID does not exist" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.654967 5049 scope.go:117] "RemoveContainer" containerID="e54cd3a69e38e170358f9ea10c2bdb279434f4df1a0feea17d8c8a2dcc4af5a6" Jan 27 17:23:29 crc kubenswrapper[5049]: E0127 17:23:29.655555 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e54cd3a69e38e170358f9ea10c2bdb279434f4df1a0feea17d8c8a2dcc4af5a6\": container with ID starting with e54cd3a69e38e170358f9ea10c2bdb279434f4df1a0feea17d8c8a2dcc4af5a6 not found: ID does not exist" containerID="e54cd3a69e38e170358f9ea10c2bdb279434f4df1a0feea17d8c8a2dcc4af5a6" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.655631 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54cd3a69e38e170358f9ea10c2bdb279434f4df1a0feea17d8c8a2dcc4af5a6"} err="failed to get container status \"e54cd3a69e38e170358f9ea10c2bdb279434f4df1a0feea17d8c8a2dcc4af5a6\": rpc error: code = NotFound desc = could not find 
container \"e54cd3a69e38e170358f9ea10c2bdb279434f4df1a0feea17d8c8a2dcc4af5a6\": container with ID starting with e54cd3a69e38e170358f9ea10c2bdb279434f4df1a0feea17d8c8a2dcc4af5a6 not found: ID does not exist" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.655718 5049 scope.go:117] "RemoveContainer" containerID="f06a36adb28b07e802b123266eea7c9259332035b792300797e4b9f2552119c3" Jan 27 17:23:29 crc kubenswrapper[5049]: E0127 17:23:29.656284 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f06a36adb28b07e802b123266eea7c9259332035b792300797e4b9f2552119c3\": container with ID starting with f06a36adb28b07e802b123266eea7c9259332035b792300797e4b9f2552119c3 not found: ID does not exist" containerID="f06a36adb28b07e802b123266eea7c9259332035b792300797e4b9f2552119c3" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.656337 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06a36adb28b07e802b123266eea7c9259332035b792300797e4b9f2552119c3"} err="failed to get container status \"f06a36adb28b07e802b123266eea7c9259332035b792300797e4b9f2552119c3\": rpc error: code = NotFound desc = could not find container \"f06a36adb28b07e802b123266eea7c9259332035b792300797e4b9f2552119c3\": container with ID starting with f06a36adb28b07e802b123266eea7c9259332035b792300797e4b9f2552119c3 not found: ID does not exist" Jan 27 17:23:29 crc kubenswrapper[5049]: I0127 17:23:29.661888 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5fba9f2-8ed7-469d-b6eb-0487061f2e85" path="/var/lib/kubelet/pods/c5fba9f2-8ed7-469d-b6eb-0487061f2e85/volumes" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.682490 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zdnkv"] Jan 27 17:23:48 crc kubenswrapper[5049]: E0127 17:23:48.683418 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67172a8b-7dd6-45cc-95db-dfbdb10d8071" containerName="extract-content" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.683433 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="67172a8b-7dd6-45cc-95db-dfbdb10d8071" containerName="extract-content" Jan 27 17:23:48 crc kubenswrapper[5049]: E0127 17:23:48.683452 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67172a8b-7dd6-45cc-95db-dfbdb10d8071" containerName="extract-utilities" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.683463 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="67172a8b-7dd6-45cc-95db-dfbdb10d8071" containerName="extract-utilities" Jan 27 17:23:48 crc kubenswrapper[5049]: E0127 17:23:48.683480 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67172a8b-7dd6-45cc-95db-dfbdb10d8071" containerName="registry-server" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.683488 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="67172a8b-7dd6-45cc-95db-dfbdb10d8071" containerName="registry-server" Jan 27 17:23:48 crc kubenswrapper[5049]: E0127 17:23:48.683501 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5fba9f2-8ed7-469d-b6eb-0487061f2e85" containerName="extract-utilities" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.683509 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5fba9f2-8ed7-469d-b6eb-0487061f2e85" containerName="extract-utilities" Jan 27 17:23:48 crc kubenswrapper[5049]: E0127 17:23:48.683529 5049 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c5fba9f2-8ed7-469d-b6eb-0487061f2e85" containerName="extract-content" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.683536 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5fba9f2-8ed7-469d-b6eb-0487061f2e85" containerName="extract-content" Jan 27 17:23:48 crc kubenswrapper[5049]: E0127 17:23:48.683548 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5fba9f2-8ed7-469d-b6eb-0487061f2e85" containerName="registry-server" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.683555 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5fba9f2-8ed7-469d-b6eb-0487061f2e85" containerName="registry-server" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.683752 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="67172a8b-7dd6-45cc-95db-dfbdb10d8071" containerName="registry-server" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.683770 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5fba9f2-8ed7-469d-b6eb-0487061f2e85" containerName="registry-server" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.685025 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.706582 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zdnkv"] Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.809735 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/786afe71-f298-40d6-be11-e0529e41f190-utilities\") pod \"community-operators-zdnkv\" (UID: \"786afe71-f298-40d6-be11-e0529e41f190\") " pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.809796 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/786afe71-f298-40d6-be11-e0529e41f190-catalog-content\") pod \"community-operators-zdnkv\" (UID: \"786afe71-f298-40d6-be11-e0529e41f190\") " pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.809994 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm8z7\" (UniqueName: \"kubernetes.io/projected/786afe71-f298-40d6-be11-e0529e41f190-kube-api-access-rm8z7\") pod \"community-operators-zdnkv\" (UID: \"786afe71-f298-40d6-be11-e0529e41f190\") " pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.911488 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm8z7\" (UniqueName: \"kubernetes.io/projected/786afe71-f298-40d6-be11-e0529e41f190-kube-api-access-rm8z7\") pod \"community-operators-zdnkv\" (UID: \"786afe71-f298-40d6-be11-e0529e41f190\") " pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.911550 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/786afe71-f298-40d6-be11-e0529e41f190-utilities\") pod \"community-operators-zdnkv\" (UID: \"786afe71-f298-40d6-be11-e0529e41f190\") " pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:48 crc 
kubenswrapper[5049]: I0127 17:23:48.911573 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/786afe71-f298-40d6-be11-e0529e41f190-catalog-content\") pod \"community-operators-zdnkv\" (UID: \"786afe71-f298-40d6-be11-e0529e41f190\") " pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.912074 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/786afe71-f298-40d6-be11-e0529e41f190-catalog-content\") pod \"community-operators-zdnkv\" (UID: \"786afe71-f298-40d6-be11-e0529e41f190\") " pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.912140 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/786afe71-f298-40d6-be11-e0529e41f190-utilities\") pod \"community-operators-zdnkv\" (UID: \"786afe71-f298-40d6-be11-e0529e41f190\") " pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:48 crc kubenswrapper[5049]: I0127 17:23:48.941742 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm8z7\" (UniqueName: \"kubernetes.io/projected/786afe71-f298-40d6-be11-e0529e41f190-kube-api-access-rm8z7\") pod \"community-operators-zdnkv\" (UID: \"786afe71-f298-40d6-be11-e0529e41f190\") " pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:49 crc kubenswrapper[5049]: I0127 17:23:49.046552 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:49 crc kubenswrapper[5049]: I0127 17:23:49.485726 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zdnkv"] Jan 27 17:23:49 crc kubenswrapper[5049]: I0127 17:23:49.756526 5049 generic.go:334] "Generic (PLEG): container finished" podID="786afe71-f298-40d6-be11-e0529e41f190" containerID="6f6db251a1ec707998959c4234d01e91ceb5a5f4624338e2219af81734bcd0ae" exitCode=0 Jan 27 17:23:49 crc kubenswrapper[5049]: I0127 17:23:49.756583 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdnkv" event={"ID":"786afe71-f298-40d6-be11-e0529e41f190","Type":"ContainerDied","Data":"6f6db251a1ec707998959c4234d01e91ceb5a5f4624338e2219af81734bcd0ae"} Jan 27 17:23:49 crc kubenswrapper[5049]: I0127 17:23:49.756618 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdnkv" event={"ID":"786afe71-f298-40d6-be11-e0529e41f190","Type":"ContainerStarted","Data":"2a117ffe2dda1a5b3b7f9afcb510146d1fbdd47c0b8ba189b9b69b751b5bfbff"} Jan 27 17:23:50 crc kubenswrapper[5049]: I0127 17:23:50.773999 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdnkv" event={"ID":"786afe71-f298-40d6-be11-e0529e41f190","Type":"ContainerStarted","Data":"6a31d84aa0a9d5f0d7064d2d41272cdcc589cf84b3a598bc0a0d40e42d980102"} Jan 27 17:23:52 crc kubenswrapper[5049]: I0127 17:23:52.798159 5049 generic.go:334] "Generic (PLEG): container finished" podID="786afe71-f298-40d6-be11-e0529e41f190" containerID="6a31d84aa0a9d5f0d7064d2d41272cdcc589cf84b3a598bc0a0d40e42d980102" exitCode=0 Jan 27 17:23:52 crc kubenswrapper[5049]: I0127 17:23:52.798224 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-zdnkv" event={"ID":"786afe71-f298-40d6-be11-e0529e41f190","Type":"ContainerDied","Data":"6a31d84aa0a9d5f0d7064d2d41272cdcc589cf84b3a598bc0a0d40e42d980102"} Jan 27 17:23:53 crc kubenswrapper[5049]: I0127 17:23:53.809350 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdnkv" event={"ID":"786afe71-f298-40d6-be11-e0529e41f190","Type":"ContainerStarted","Data":"773660c183563b835c216e4c91b26a58d2cdf05cb4b73ee2afebd76a5584b3e7"} Jan 27 17:23:53 crc kubenswrapper[5049]: I0127 17:23:53.835287 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zdnkv" podStartSLOduration=2.195738512 podStartE2EDuration="5.835270083s" podCreationTimestamp="2026-01-27 17:23:48 +0000 UTC" firstStartedPulling="2026-01-27 17:23:49.758131952 +0000 UTC m=+1604.857105531" lastFinishedPulling="2026-01-27 17:23:53.397663553 +0000 UTC m=+1608.496637102" observedRunningTime="2026-01-27 17:23:53.831599488 +0000 UTC m=+1608.930573097" watchObservedRunningTime="2026-01-27 17:23:53.835270083 +0000 UTC m=+1608.934243642" Jan 27 17:23:59 crc kubenswrapper[5049]: I0127 17:23:59.047013 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:59 crc kubenswrapper[5049]: I0127 17:23:59.047509 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:59 crc kubenswrapper[5049]: I0127 17:23:59.112345 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:59 crc kubenswrapper[5049]: I0127 17:23:59.924976 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:23:59 crc kubenswrapper[5049]: I0127 17:23:59.994229 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zdnkv"] Jan 27 17:24:01 crc kubenswrapper[5049]: I0127 17:24:01.882935 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zdnkv" podUID="786afe71-f298-40d6-be11-e0529e41f190" containerName="registry-server" containerID="cri-o://773660c183563b835c216e4c91b26a58d2cdf05cb4b73ee2afebd76a5584b3e7" gracePeriod=2 Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.439305 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.519500 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/786afe71-f298-40d6-be11-e0529e41f190-catalog-content\") pod \"786afe71-f298-40d6-be11-e0529e41f190\" (UID: \"786afe71-f298-40d6-be11-e0529e41f190\") " Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.519569 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/786afe71-f298-40d6-be11-e0529e41f190-utilities\") pod \"786afe71-f298-40d6-be11-e0529e41f190\" (UID: \"786afe71-f298-40d6-be11-e0529e41f190\") " Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.519623 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm8z7\" (UniqueName: \"kubernetes.io/projected/786afe71-f298-40d6-be11-e0529e41f190-kube-api-access-rm8z7\") pod \"786afe71-f298-40d6-be11-e0529e41f190\" (UID: \"786afe71-f298-40d6-be11-e0529e41f190\") " Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.520812 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/786afe71-f298-40d6-be11-e0529e41f190-utilities" (OuterVolumeSpecName: "utilities") pod "786afe71-f298-40d6-be11-e0529e41f190" (UID: "786afe71-f298-40d6-be11-e0529e41f190"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.526817 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/786afe71-f298-40d6-be11-e0529e41f190-kube-api-access-rm8z7" (OuterVolumeSpecName: "kube-api-access-rm8z7") pod "786afe71-f298-40d6-be11-e0529e41f190" (UID: "786afe71-f298-40d6-be11-e0529e41f190"). InnerVolumeSpecName "kube-api-access-rm8z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.609799 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/786afe71-f298-40d6-be11-e0529e41f190-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "786afe71-f298-40d6-be11-e0529e41f190" (UID: "786afe71-f298-40d6-be11-e0529e41f190"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.622023 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/786afe71-f298-40d6-be11-e0529e41f190-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.622068 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/786afe71-f298-40d6-be11-e0529e41f190-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.622094 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm8z7\" (UniqueName: \"kubernetes.io/projected/786afe71-f298-40d6-be11-e0529e41f190-kube-api-access-rm8z7\") on node \"crc\" DevicePath \"\"" Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.896765 5049 generic.go:334] "Generic (PLEG): container finished" podID="786afe71-f298-40d6-be11-e0529e41f190" containerID="773660c183563b835c216e4c91b26a58d2cdf05cb4b73ee2afebd76a5584b3e7" exitCode=0 Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.896900 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdnkv" event={"ID":"786afe71-f298-40d6-be11-e0529e41f190","Type":"ContainerDied","Data":"773660c183563b835c216e4c91b26a58d2cdf05cb4b73ee2afebd76a5584b3e7"} Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.896903 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zdnkv" Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.896986 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdnkv" event={"ID":"786afe71-f298-40d6-be11-e0529e41f190","Type":"ContainerDied","Data":"2a117ffe2dda1a5b3b7f9afcb510146d1fbdd47c0b8ba189b9b69b751b5bfbff"} Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.897029 5049 scope.go:117] "RemoveContainer" containerID="773660c183563b835c216e4c91b26a58d2cdf05cb4b73ee2afebd76a5584b3e7" Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.930498 5049 scope.go:117] "RemoveContainer" containerID="6a31d84aa0a9d5f0d7064d2d41272cdcc589cf84b3a598bc0a0d40e42d980102" Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.957928 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zdnkv"] Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.982038 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zdnkv"] Jan 27 17:24:02 crc kubenswrapper[5049]: I0127 17:24:02.984461 5049 scope.go:117] "RemoveContainer" containerID="6f6db251a1ec707998959c4234d01e91ceb5a5f4624338e2219af81734bcd0ae" Jan 27 17:24:03 crc kubenswrapper[5049]: I0127 17:24:03.019224 5049 scope.go:117] "RemoveContainer" containerID="773660c183563b835c216e4c91b26a58d2cdf05cb4b73ee2afebd76a5584b3e7" Jan 27 17:24:03 crc kubenswrapper[5049]: E0127 17:24:03.019780 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"773660c183563b835c216e4c91b26a58d2cdf05cb4b73ee2afebd76a5584b3e7\": container with ID starting with 773660c183563b835c216e4c91b26a58d2cdf05cb4b73ee2afebd76a5584b3e7 not found: ID does not exist" containerID="773660c183563b835c216e4c91b26a58d2cdf05cb4b73ee2afebd76a5584b3e7" Jan 27 17:24:03 crc kubenswrapper[5049]: I0127 17:24:03.019851 
5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"773660c183563b835c216e4c91b26a58d2cdf05cb4b73ee2afebd76a5584b3e7"} err="failed to get container status \"773660c183563b835c216e4c91b26a58d2cdf05cb4b73ee2afebd76a5584b3e7\": rpc error: code = NotFound desc = could not find container \"773660c183563b835c216e4c91b26a58d2cdf05cb4b73ee2afebd76a5584b3e7\": container with ID starting with 773660c183563b835c216e4c91b26a58d2cdf05cb4b73ee2afebd76a5584b3e7 not found: ID does not exist" Jan 27 17:24:03 crc kubenswrapper[5049]: I0127 17:24:03.019889 5049 scope.go:117] "RemoveContainer" containerID="6a31d84aa0a9d5f0d7064d2d41272cdcc589cf84b3a598bc0a0d40e42d980102" Jan 27 17:24:03 crc kubenswrapper[5049]: E0127 17:24:03.020305 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a31d84aa0a9d5f0d7064d2d41272cdcc589cf84b3a598bc0a0d40e42d980102\": container with ID starting with 6a31d84aa0a9d5f0d7064d2d41272cdcc589cf84b3a598bc0a0d40e42d980102 not found: ID does not exist" containerID="6a31d84aa0a9d5f0d7064d2d41272cdcc589cf84b3a598bc0a0d40e42d980102" Jan 27 17:24:03 crc kubenswrapper[5049]: I0127 17:24:03.020359 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a31d84aa0a9d5f0d7064d2d41272cdcc589cf84b3a598bc0a0d40e42d980102"} err="failed to get container status \"6a31d84aa0a9d5f0d7064d2d41272cdcc589cf84b3a598bc0a0d40e42d980102\": rpc error: code = NotFound desc = could not find container \"6a31d84aa0a9d5f0d7064d2d41272cdcc589cf84b3a598bc0a0d40e42d980102\": container with ID starting with 6a31d84aa0a9d5f0d7064d2d41272cdcc589cf84b3a598bc0a0d40e42d980102 not found: ID does not exist" Jan 27 17:24:03 crc kubenswrapper[5049]: I0127 17:24:03.020404 5049 scope.go:117] "RemoveContainer" containerID="6f6db251a1ec707998959c4234d01e91ceb5a5f4624338e2219af81734bcd0ae" Jan 27 17:24:03 crc kubenswrapper[5049]: E0127 17:24:03.020894 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f6db251a1ec707998959c4234d01e91ceb5a5f4624338e2219af81734bcd0ae\": container with ID starting with 6f6db251a1ec707998959c4234d01e91ceb5a5f4624338e2219af81734bcd0ae not found: ID does not exist" containerID="6f6db251a1ec707998959c4234d01e91ceb5a5f4624338e2219af81734bcd0ae" Jan 27 17:24:03 crc kubenswrapper[5049]: I0127 17:24:03.020939 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f6db251a1ec707998959c4234d01e91ceb5a5f4624338e2219af81734bcd0ae"} err="failed to get container status \"6f6db251a1ec707998959c4234d01e91ceb5a5f4624338e2219af81734bcd0ae\": rpc error: code = NotFound desc = could not find container \"6f6db251a1ec707998959c4234d01e91ceb5a5f4624338e2219af81734bcd0ae\": container with ID starting with 6f6db251a1ec707998959c4234d01e91ceb5a5f4624338e2219af81734bcd0ae not found: ID does not exist" Jan 27 17:24:03 crc kubenswrapper[5049]: I0127 17:24:03.662458 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="786afe71-f298-40d6-be11-e0529e41f190" path="/var/lib/kubelet/pods/786afe71-f298-40d6-be11-e0529e41f190/volumes" Jan 27 17:24:12 crc kubenswrapper[5049]: I0127 17:24:12.890953 5049 scope.go:117] "RemoveContainer" containerID="bf4496e8b75d16d17c6412080add0938fc8b015c3af6283bf184fae15703af44" Jan 27 17:24:12 crc kubenswrapper[5049]: I0127 17:24:12.945935 5049 scope.go:117] "RemoveContainer" 
containerID="ae0d327447843e6e5818c34d84bfad4757042611fef3894efd400ad9be445ea2" Jan 27 17:24:12 crc kubenswrapper[5049]: I0127 17:24:12.989568 5049 scope.go:117] "RemoveContainer" containerID="f9bed1843a91672f90172fb39a72ff66021d8df792a9d70084e5a6ea6b70cdc1" Jan 27 17:24:13 crc kubenswrapper[5049]: I0127 17:24:13.051648 5049 scope.go:117] "RemoveContainer" containerID="bce3f8ac28bbafaaf90ec8f0010712151e873fe41ef7e673be768c9b5aef4e48" Jan 27 17:24:13 crc kubenswrapper[5049]: I0127 17:24:13.080095 5049 scope.go:117] "RemoveContainer" containerID="8832f0497f6317088c066f86d39c6ff6515783e0965d4828a799b9a8e0dc9357" Jan 27 17:24:13 crc kubenswrapper[5049]: I0127 17:24:13.132747 5049 scope.go:117] "RemoveContainer" containerID="1a5011d1ce56fb586eae0db1f125d6527f67faabd3172ec43c0043152119152b" Jan 27 17:24:13 crc kubenswrapper[5049]: I0127 17:24:13.182949 5049 scope.go:117] "RemoveContainer" containerID="ee5ae698dfe15cec5da501cfca88f038e751a977e00e817c10344908bab2296c" Jan 27 17:24:13 crc kubenswrapper[5049]: I0127 17:24:13.210963 5049 scope.go:117] "RemoveContainer" containerID="a03fcd74978f09ca045dfe9c61b9f24cf5e346044f862b8fae04cbbf4b1c4ef2" Jan 27 17:24:13 crc kubenswrapper[5049]: I0127 17:24:13.263518 5049 scope.go:117] "RemoveContainer" containerID="d9652b205e581e553a0c9e06258e912e875db1cccc2fc4ed7a75314d3904f38d" Jan 27 17:24:13 crc kubenswrapper[5049]: I0127 17:24:13.301014 5049 scope.go:117] "RemoveContainer" containerID="ce96b42959d702c7c1ddfd5a0a340e66afe6b3a0ac3d7f1366977905c48b5ef8" Jan 27 17:24:13 crc kubenswrapper[5049]: I0127 17:24:13.330659 5049 scope.go:117] "RemoveContainer" containerID="b2606a0b66c74e770aad6521163bf92feb7174e6534ebf3f44b0803ed90204d1" Jan 27 17:24:13 crc kubenswrapper[5049]: I0127 17:24:13.354711 5049 scope.go:117] "RemoveContainer" containerID="ff18291a08fce870db4a8157c08cef6cde160a9942e9acc6a215e113f67e1c1b" Jan 27 17:24:13 crc kubenswrapper[5049]: I0127 17:24:13.397870 5049 scope.go:117] "RemoveContainer" containerID="2e88a16790a77d57aa2efbb521452dac400308fe716002d7c10d178232b694c3" Jan 27 17:24:17 crc kubenswrapper[5049]: I0127 17:24:17.781154 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:24:17 crc kubenswrapper[5049]: I0127 17:24:17.781531 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:24:47 crc kubenswrapper[5049]: I0127 17:24:47.781497 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:24:47 crc kubenswrapper[5049]: I0127 17:24:47.782154 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 
17:25:13 crc kubenswrapper[5049]: I0127 17:25:13.776480 5049 scope.go:117] "RemoveContainer" containerID="c37cefb751d64431838d0d639d77094b356c1cbc28d82c80423ed1139e6e6a83" Jan 27 17:25:13 crc kubenswrapper[5049]: I0127 17:25:13.796394 5049 scope.go:117] "RemoveContainer" containerID="d37d30713855c20058ddaa0bf88d078a2270dcf3c57898ea717f889e47119dd9" Jan 27 17:25:13 crc kubenswrapper[5049]: I0127 17:25:13.816543 5049 scope.go:117] "RemoveContainer" containerID="7ae3504deccf9251f861e1b04cad137e5cae30793e3aca67a011fc088b924d3c" Jan 27 17:25:13 crc kubenswrapper[5049]: I0127 17:25:13.837184 5049 scope.go:117] "RemoveContainer" containerID="cecd0a11e45fbdb4890c4d67eb2840588379ffa4729b0318a1da22165154bf43" Jan 27 17:25:13 crc kubenswrapper[5049]: I0127 17:25:13.865029 5049 scope.go:117] "RemoveContainer" containerID="4e44fbc63bdfe3405cc0c996f34a24eeb2b3df0dad0e5067d6936856e7cf90b2" Jan 27 17:25:13 crc kubenswrapper[5049]: I0127 17:25:13.912161 5049 scope.go:117] "RemoveContainer" containerID="f0557be01aeda0b262ae4132263b0d52831fe6b39db78cba0346c927862f56d4" Jan 27 17:25:13 crc kubenswrapper[5049]: I0127 17:25:13.927144 5049 scope.go:117] "RemoveContainer" containerID="b47f5414c1582f21b1da4088380b6b02a451e3a5445b297b8bf6ad208bb2d933" Jan 27 17:25:13 crc kubenswrapper[5049]: I0127 17:25:13.949656 5049 scope.go:117] "RemoveContainer" containerID="44858ca344cb02f0f890719655ebff9255be9cb7ce9987f2a0c412b281f23bd6" Jan 27 17:25:13 crc kubenswrapper[5049]: I0127 17:25:13.981393 5049 scope.go:117] "RemoveContainer" containerID="4d62fe3f218ab430b02c0a5feb5685b2843febf6591ab36fb4b523d784f6cfe2" Jan 27 17:25:13 crc kubenswrapper[5049]: I0127 17:25:13.995179 5049 scope.go:117] "RemoveContainer" containerID="c422dcf58a12b3e1c2bbfb5a87f0b3c14d5feabb3c8f40c663b069b0a0e59651" Jan 27 17:25:14 crc kubenswrapper[5049]: I0127 17:25:14.030771 5049 scope.go:117] "RemoveContainer" containerID="937e562346ae487fb3be55f7c8b72e630d672dcec3184ea0f4d6dfb0b5d0bebd" Jan 27 17:25:14 crc kubenswrapper[5049]: I0127 17:25:14.049458 5049 scope.go:117] "RemoveContainer" containerID="7215db154fca4c485ce5f2aa053df582e36a7f1fecb9e09ebfd15e0e765ca5ec" Jan 27 17:25:17 crc kubenswrapper[5049]: I0127 17:25:17.781602 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:25:17 crc kubenswrapper[5049]: I0127 17:25:17.781984 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:25:17 crc kubenswrapper[5049]: I0127 17:25:17.782040 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 17:25:17 crc kubenswrapper[5049]: I0127 17:25:17.782767 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:25:17 crc 
kubenswrapper[5049]: I0127 17:25:17.782906 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" gracePeriod=600 Jan 27 17:25:17 crc kubenswrapper[5049]: E0127 17:25:17.937601 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:25:18 crc kubenswrapper[5049]: I0127 17:25:18.275698 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" exitCode=0 Jan 27 17:25:18 crc kubenswrapper[5049]: I0127 17:25:18.275729 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00"} Jan 27 17:25:18 crc kubenswrapper[5049]: I0127 17:25:18.275795 5049 scope.go:117] "RemoveContainer" containerID="081b74af340b03286f3b46d2254afe32fe4625cc1e5446a6c08c340a2428ad40" Jan 27 17:25:18 crc kubenswrapper[5049]: I0127 17:25:18.276367 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:25:18 crc kubenswrapper[5049]: E0127 17:25:18.276623 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:25:33 crc kubenswrapper[5049]: I0127 17:25:33.646291 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:25:33 crc kubenswrapper[5049]: E0127 17:25:33.648337 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:25:45 crc kubenswrapper[5049]: I0127 17:25:45.652989 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:25:45 crc kubenswrapper[5049]: E0127 17:25:45.653771 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
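
The records above capture a liveness-probe restart: the machine-config-daemon probe fails at 17:24:17, 17:24:47, and 17:25:17 (a 30-second period, with the restart decision taken on the third consecutive failure, consistent with the default failureThreshold of 3), the container is killed with the pod's 600-second grace period, and every subsequent sync attempt is skipped with "back-off 5m0s", i.e. CrashLoopBackOff already at its cap. A minimal Go sketch of the documented backoff rule (10s initial delay, doubled per restart, capped at 5m); the crashLoopDelay helper and its values are illustrative, not kubelet code:

	package main

	import (
		"fmt"
		"time"
	)

	// crashLoopDelay mimics the documented CrashLoopBackOff schedule:
	// the wait starts at `initial` and doubles on every restart until
	// it reaches `maxDelay`, where it stays (the "back-off 5m0s" above).
	func crashLoopDelay(restarts int, initial, maxDelay time.Duration) time.Duration {
		d := initial
		for i := 0; i < restarts; i++ {
			d *= 2
			if d >= maxDelay {
				return maxDelay
			}
		}
		return d
	}

	func main() {
		for r := 0; r <= 6; r++ {
			fmt.Printf("restart %d -> wait %v\n", r, crashLoopDelay(r, 10*time.Second, 5*time.Minute))
		}
		// Output reaches 5m0s from the fifth restart onward, matching
		// the capped "back-off 5m0s restarting failed container" errors.
	}
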
pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:25:56 crc kubenswrapper[5049]: I0127 17:25:56.645750 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:25:56 crc kubenswrapper[5049]: E0127 17:25:56.646556 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:26:10 crc kubenswrapper[5049]: I0127 17:26:10.646988 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:26:10 crc kubenswrapper[5049]: E0127 17:26:10.648288 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:26:14 crc kubenswrapper[5049]: I0127 17:26:14.221207 5049 scope.go:117] "RemoveContainer" containerID="563de14929d078dd19bf2cb77b128291a6d18e7e9090227946c7b7340017db70" Jan 27 17:26:14 crc kubenswrapper[5049]: I0127 17:26:14.279799 5049 scope.go:117] "RemoveContainer" containerID="87e630809570d66b82043e72d7b0c73f26f019253b501e0c1c593168656b633d" Jan 27 17:26:21 crc kubenswrapper[5049]: I0127 17:26:21.646531 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:26:21 crc kubenswrapper[5049]: E0127 17:26:21.647349 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:26:36 crc kubenswrapper[5049]: I0127 17:26:36.651176 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:26:36 crc kubenswrapper[5049]: E0127 17:26:36.653191 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:26:47 crc kubenswrapper[5049]: I0127 17:26:47.646478 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:26:47 crc kubenswrapper[5049]: E0127 17:26:47.647653 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:26:58 crc kubenswrapper[5049]: I0127 17:26:58.646469 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:26:58 crc kubenswrapper[5049]: E0127 17:26:58.647300 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:27:13 crc kubenswrapper[5049]: I0127 17:27:13.645785 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:27:13 crc kubenswrapper[5049]: E0127 17:27:13.646476 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:27:14 crc kubenswrapper[5049]: I0127 17:27:14.355431 5049 scope.go:117] "RemoveContainer" containerID="ed46c77a82d243a5da6e3709d5009665adaeef4af279b00988f34600ecd91dc7" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.644298 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2m7pb"] Jan 27 17:27:25 crc kubenswrapper[5049]: E0127 17:27:25.646649 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786afe71-f298-40d6-be11-e0529e41f190" containerName="extract-utilities" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.646697 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="786afe71-f298-40d6-be11-e0529e41f190" containerName="extract-utilities" Jan 27 17:27:25 crc kubenswrapper[5049]: E0127 17:27:25.646731 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786afe71-f298-40d6-be11-e0529e41f190" containerName="registry-server" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.646742 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="786afe71-f298-40d6-be11-e0529e41f190" containerName="registry-server" Jan 27 17:27:25 crc kubenswrapper[5049]: E0127 17:27:25.646774 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786afe71-f298-40d6-be11-e0529e41f190" containerName="extract-content" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.646785 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="786afe71-f298-40d6-be11-e0529e41f190" containerName="extract-content" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.647005 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="786afe71-f298-40d6-be11-e0529e41f190" containerName="registry-server" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.648371 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.682812 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2m7pb"] Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.781887 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82af32d9-43e8-4416-aab2-8107103cc7ff-catalog-content\") pod \"redhat-operators-2m7pb\" (UID: \"82af32d9-43e8-4416-aab2-8107103cc7ff\") " pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.781949 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89mrx\" (UniqueName: \"kubernetes.io/projected/82af32d9-43e8-4416-aab2-8107103cc7ff-kube-api-access-89mrx\") pod \"redhat-operators-2m7pb\" (UID: \"82af32d9-43e8-4416-aab2-8107103cc7ff\") " pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.781999 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82af32d9-43e8-4416-aab2-8107103cc7ff-utilities\") pod \"redhat-operators-2m7pb\" (UID: \"82af32d9-43e8-4416-aab2-8107103cc7ff\") " pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.882999 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82af32d9-43e8-4416-aab2-8107103cc7ff-catalog-content\") pod \"redhat-operators-2m7pb\" (UID: \"82af32d9-43e8-4416-aab2-8107103cc7ff\") " pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.883068 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89mrx\" (UniqueName: \"kubernetes.io/projected/82af32d9-43e8-4416-aab2-8107103cc7ff-kube-api-access-89mrx\") pod \"redhat-operators-2m7pb\" (UID: \"82af32d9-43e8-4416-aab2-8107103cc7ff\") " pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.883117 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82af32d9-43e8-4416-aab2-8107103cc7ff-utilities\") pod \"redhat-operators-2m7pb\" (UID: \"82af32d9-43e8-4416-aab2-8107103cc7ff\") " pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.883652 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82af32d9-43e8-4416-aab2-8107103cc7ff-catalog-content\") pod \"redhat-operators-2m7pb\" (UID: \"82af32d9-43e8-4416-aab2-8107103cc7ff\") " pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.883719 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82af32d9-43e8-4416-aab2-8107103cc7ff-utilities\") pod \"redhat-operators-2m7pb\" (UID: \"82af32d9-43e8-4416-aab2-8107103cc7ff\") " pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.913899 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-89mrx\" (UniqueName: \"kubernetes.io/projected/82af32d9-43e8-4416-aab2-8107103cc7ff-kube-api-access-89mrx\") pod \"redhat-operators-2m7pb\" (UID: \"82af32d9-43e8-4416-aab2-8107103cc7ff\") " pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:25 crc kubenswrapper[5049]: I0127 17:27:25.991917 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:26 crc kubenswrapper[5049]: I0127 17:27:26.438402 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2m7pb"] Jan 27 17:27:26 crc kubenswrapper[5049]: I0127 17:27:26.645755 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:27:26 crc kubenswrapper[5049]: E0127 17:27:26.646076 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:27:26 crc kubenswrapper[5049]: I0127 17:27:26.802793 5049 generic.go:334] "Generic (PLEG): container finished" podID="82af32d9-43e8-4416-aab2-8107103cc7ff" containerID="b521b7c774277133d45e11c6243d46678b847fe0f09039a87cd675c2834572cc" exitCode=0 Jan 27 17:27:26 crc kubenswrapper[5049]: I0127 17:27:26.802846 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2m7pb" event={"ID":"82af32d9-43e8-4416-aab2-8107103cc7ff","Type":"ContainerDied","Data":"b521b7c774277133d45e11c6243d46678b847fe0f09039a87cd675c2834572cc"} Jan 27 17:27:26 crc kubenswrapper[5049]: I0127 17:27:26.802880 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2m7pb" event={"ID":"82af32d9-43e8-4416-aab2-8107103cc7ff","Type":"ContainerStarted","Data":"20e0977a56b2a90ec8cc52b2b1347d059b22bbeb9844a1bcb6b522c88fd32d33"} Jan 27 17:27:34 crc kubenswrapper[5049]: I0127 17:27:34.864876 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2m7pb" event={"ID":"82af32d9-43e8-4416-aab2-8107103cc7ff","Type":"ContainerStarted","Data":"ea6fcf76c510432c799fdbb588b2e20461ed0bc0e3aeb7a9c04de8a89c82106c"} Jan 27 17:27:35 crc kubenswrapper[5049]: I0127 17:27:35.878580 5049 generic.go:334] "Generic (PLEG): container finished" podID="82af32d9-43e8-4416-aab2-8107103cc7ff" containerID="ea6fcf76c510432c799fdbb588b2e20461ed0bc0e3aeb7a9c04de8a89c82106c" exitCode=0 Jan 27 17:27:35 crc kubenswrapper[5049]: I0127 17:27:35.878655 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2m7pb" event={"ID":"82af32d9-43e8-4416-aab2-8107103cc7ff","Type":"ContainerDied","Data":"ea6fcf76c510432c799fdbb588b2e20461ed0bc0e3aeb7a9c04de8a89c82106c"} Jan 27 17:27:36 crc kubenswrapper[5049]: I0127 17:27:36.904035 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2m7pb" event={"ID":"82af32d9-43e8-4416-aab2-8107103cc7ff","Type":"ContainerStarted","Data":"6c34ac1b1db71651f7d5d8161034f09d0ce12069844e4b674a001ec16ab31526"} Jan 27 17:27:36 crc kubenswrapper[5049]: I0127 17:27:36.936830 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-2m7pb" podStartSLOduration=2.456014382 podStartE2EDuration="11.936806225s" podCreationTimestamp="2026-01-27 17:27:25 +0000 UTC" firstStartedPulling="2026-01-27 17:27:26.804623347 +0000 UTC m=+1821.903596906" lastFinishedPulling="2026-01-27 17:27:36.28541517 +0000 UTC m=+1831.384388749" observedRunningTime="2026-01-27 17:27:36.926629874 +0000 UTC m=+1832.025603493" watchObservedRunningTime="2026-01-27 17:27:36.936806225 +0000 UTC m=+1832.035779804" Jan 27 17:27:37 crc kubenswrapper[5049]: I0127 17:27:37.646832 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:27:37 crc kubenswrapper[5049]: E0127 17:27:37.647234 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:27:45 crc kubenswrapper[5049]: I0127 17:27:45.993059 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:45 crc kubenswrapper[5049]: I0127 17:27:45.993850 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:46 crc kubenswrapper[5049]: I0127 17:27:46.081421 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:47 crc kubenswrapper[5049]: I0127 17:27:47.069788 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2m7pb" Jan 27 17:27:47 crc kubenswrapper[5049]: I0127 17:27:47.476312 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2m7pb"] Jan 27 17:27:47 crc kubenswrapper[5049]: I0127 17:27:47.848540 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r4psp"] Jan 27 17:27:47 crc kubenswrapper[5049]: I0127 17:27:47.848947 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r4psp" podUID="58dd2ae5-2dc6-4602-a286-28c5809cc910" containerName="registry-server" containerID="cri-o://cb6641bc1d9a34af89d0a1ebf9e91185fe8e9d142d0096150ccc989cc5dcda61" gracePeriod=2 Jan 27 17:27:49 crc kubenswrapper[5049]: I0127 17:27:49.009009 5049 generic.go:334] "Generic (PLEG): container finished" podID="58dd2ae5-2dc6-4602-a286-28c5809cc910" containerID="cb6641bc1d9a34af89d0a1ebf9e91185fe8e9d142d0096150ccc989cc5dcda61" exitCode=0 Jan 27 17:27:49 crc kubenswrapper[5049]: I0127 17:27:49.009084 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4psp" event={"ID":"58dd2ae5-2dc6-4602-a286-28c5809cc910","Type":"ContainerDied","Data":"cb6641bc1d9a34af89d0a1ebf9e91185fe8e9d142d0096150ccc989cc5dcda61"} Jan 27 17:27:49 crc kubenswrapper[5049]: I0127 17:27:49.646378 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:27:49 crc kubenswrapper[5049]: E0127 17:27:49.646588 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:27:50 crc kubenswrapper[5049]: I0127 17:27:50.016237 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r4psp" event={"ID":"58dd2ae5-2dc6-4602-a286-28c5809cc910","Type":"ContainerDied","Data":"75e1e31b3a42618cc10bfd9d65b77794cd4ede379ee4135898d6b5e1fb05f301"} Jan 27 17:27:50 crc kubenswrapper[5049]: I0127 17:27:50.016532 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75e1e31b3a42618cc10bfd9d65b77794cd4ede379ee4135898d6b5e1fb05f301" Jan 27 17:27:50 crc kubenswrapper[5049]: I0127 17:27:50.033478 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:27:50 crc kubenswrapper[5049]: I0127 17:27:50.225005 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcs6p\" (UniqueName: \"kubernetes.io/projected/58dd2ae5-2dc6-4602-a286-28c5809cc910-kube-api-access-hcs6p\") pod \"58dd2ae5-2dc6-4602-a286-28c5809cc910\" (UID: \"58dd2ae5-2dc6-4602-a286-28c5809cc910\") " Jan 27 17:27:50 crc kubenswrapper[5049]: I0127 17:27:50.225132 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58dd2ae5-2dc6-4602-a286-28c5809cc910-utilities\") pod \"58dd2ae5-2dc6-4602-a286-28c5809cc910\" (UID: \"58dd2ae5-2dc6-4602-a286-28c5809cc910\") " Jan 27 17:27:50 crc kubenswrapper[5049]: I0127 17:27:50.225206 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58dd2ae5-2dc6-4602-a286-28c5809cc910-catalog-content\") pod \"58dd2ae5-2dc6-4602-a286-28c5809cc910\" (UID: \"58dd2ae5-2dc6-4602-a286-28c5809cc910\") " Jan 27 17:27:50 crc kubenswrapper[5049]: I0127 17:27:50.227949 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58dd2ae5-2dc6-4602-a286-28c5809cc910-utilities" (OuterVolumeSpecName: "utilities") pod "58dd2ae5-2dc6-4602-a286-28c5809cc910" (UID: "58dd2ae5-2dc6-4602-a286-28c5809cc910"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:27:50 crc kubenswrapper[5049]: I0127 17:27:50.231609 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58dd2ae5-2dc6-4602-a286-28c5809cc910-kube-api-access-hcs6p" (OuterVolumeSpecName: "kube-api-access-hcs6p") pod "58dd2ae5-2dc6-4602-a286-28c5809cc910" (UID: "58dd2ae5-2dc6-4602-a286-28c5809cc910"). InnerVolumeSpecName "kube-api-access-hcs6p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:27:50 crc kubenswrapper[5049]: I0127 17:27:50.327157 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58dd2ae5-2dc6-4602-a286-28c5809cc910-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:27:50 crc kubenswrapper[5049]: I0127 17:27:50.327205 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcs6p\" (UniqueName: \"kubernetes.io/projected/58dd2ae5-2dc6-4602-a286-28c5809cc910-kube-api-access-hcs6p\") on node \"crc\" DevicePath \"\"" Jan 27 17:27:50 crc kubenswrapper[5049]: I0127 17:27:50.364036 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58dd2ae5-2dc6-4602-a286-28c5809cc910-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58dd2ae5-2dc6-4602-a286-28c5809cc910" (UID: "58dd2ae5-2dc6-4602-a286-28c5809cc910"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:27:50 crc kubenswrapper[5049]: I0127 17:27:50.428960 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58dd2ae5-2dc6-4602-a286-28c5809cc910-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:27:51 crc kubenswrapper[5049]: I0127 17:27:51.022132 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r4psp" Jan 27 17:27:51 crc kubenswrapper[5049]: I0127 17:27:51.050837 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r4psp"] Jan 27 17:27:51 crc kubenswrapper[5049]: I0127 17:27:51.057103 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r4psp"] Jan 27 17:27:51 crc kubenswrapper[5049]: I0127 17:27:51.663415 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58dd2ae5-2dc6-4602-a286-28c5809cc910" path="/var/lib/kubelet/pods/58dd2ae5-2dc6-4602-a286-28c5809cc910/volumes" Jan 27 17:28:03 crc kubenswrapper[5049]: I0127 17:28:03.647100 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:28:03 crc kubenswrapper[5049]: E0127 17:28:03.648231 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:28:14 crc kubenswrapper[5049]: I0127 17:28:14.463943 5049 scope.go:117] "RemoveContainer" containerID="cb6641bc1d9a34af89d0a1ebf9e91185fe8e9d142d0096150ccc989cc5dcda61" Jan 27 17:28:14 crc kubenswrapper[5049]: I0127 17:28:14.483562 5049 scope.go:117] "RemoveContainer" containerID="b5982f62a9a3a063641e537e7020c7c9b5a450557cb0cf808c2ad981bb83afd7" Jan 27 17:28:14 crc kubenswrapper[5049]: I0127 17:28:14.504460 5049 scope.go:117] "RemoveContainer" containerID="a1060f20b2f2c0a6bdc445e4bb91fe41edf33f749bf966ea209534b6941bdb25" Jan 27 17:28:16 crc kubenswrapper[5049]: I0127 17:28:16.646554 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:28:16 crc kubenswrapper[5049]: E0127 17:28:16.646855 5049 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:28:31 crc kubenswrapper[5049]: I0127 17:28:31.646884 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:28:31 crc kubenswrapper[5049]: E0127 17:28:31.648162 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:28:44 crc kubenswrapper[5049]: I0127 17:28:44.646233 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:28:44 crc kubenswrapper[5049]: E0127 17:28:44.647477 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:28:57 crc kubenswrapper[5049]: I0127 17:28:57.646863 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:28:57 crc kubenswrapper[5049]: E0127 17:28:57.648890 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:29:11 crc kubenswrapper[5049]: I0127 17:29:11.646937 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:29:11 crc kubenswrapper[5049]: E0127 17:29:11.647686 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:29:26 crc kubenswrapper[5049]: I0127 17:29:26.646277 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:29:26 crc kubenswrapper[5049]: E0127 17:29:26.647395 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:29:39 crc kubenswrapper[5049]: I0127 17:29:39.647103 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:29:39 crc kubenswrapper[5049]: E0127 17:29:39.648144 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:29:54 crc kubenswrapper[5049]: I0127 17:29:54.646594 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:29:54 crc kubenswrapper[5049]: E0127 17:29:54.647768 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.166137 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj"] Jan 27 17:30:00 crc kubenswrapper[5049]: E0127 17:30:00.166839 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58dd2ae5-2dc6-4602-a286-28c5809cc910" containerName="extract-content" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.166855 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="58dd2ae5-2dc6-4602-a286-28c5809cc910" containerName="extract-content" Jan 27 17:30:00 crc kubenswrapper[5049]: E0127 17:30:00.166879 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58dd2ae5-2dc6-4602-a286-28c5809cc910" containerName="registry-server" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.166887 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="58dd2ae5-2dc6-4602-a286-28c5809cc910" containerName="registry-server" Jan 27 17:30:00 crc kubenswrapper[5049]: E0127 17:30:00.166902 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58dd2ae5-2dc6-4602-a286-28c5809cc910" containerName="extract-utilities" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.166912 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="58dd2ae5-2dc6-4602-a286-28c5809cc910" containerName="extract-utilities" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.167062 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="58dd2ae5-2dc6-4602-a286-28c5809cc910" containerName="registry-server" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.167590 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.170281 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.171562 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.181502 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj"] Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.270667 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95pvp\" (UniqueName: \"kubernetes.io/projected/da79d64c-a115-4a32-a92d-a6f99ad18b93-kube-api-access-95pvp\") pod \"collect-profiles-29492250-dccrj\" (UID: \"da79d64c-a115-4a32-a92d-a6f99ad18b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.270821 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da79d64c-a115-4a32-a92d-a6f99ad18b93-config-volume\") pod \"collect-profiles-29492250-dccrj\" (UID: \"da79d64c-a115-4a32-a92d-a6f99ad18b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.270853 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da79d64c-a115-4a32-a92d-a6f99ad18b93-secret-volume\") pod \"collect-profiles-29492250-dccrj\" (UID: \"da79d64c-a115-4a32-a92d-a6f99ad18b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.372481 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95pvp\" (UniqueName: \"kubernetes.io/projected/da79d64c-a115-4a32-a92d-a6f99ad18b93-kube-api-access-95pvp\") pod \"collect-profiles-29492250-dccrj\" (UID: \"da79d64c-a115-4a32-a92d-a6f99ad18b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.372639 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da79d64c-a115-4a32-a92d-a6f99ad18b93-config-volume\") pod \"collect-profiles-29492250-dccrj\" (UID: \"da79d64c-a115-4a32-a92d-a6f99ad18b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.372688 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da79d64c-a115-4a32-a92d-a6f99ad18b93-secret-volume\") pod \"collect-profiles-29492250-dccrj\" (UID: \"da79d64c-a115-4a32-a92d-a6f99ad18b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.373935 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da79d64c-a115-4a32-a92d-a6f99ad18b93-config-volume\") pod 
\"collect-profiles-29492250-dccrj\" (UID: \"da79d64c-a115-4a32-a92d-a6f99ad18b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.383864 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da79d64c-a115-4a32-a92d-a6f99ad18b93-secret-volume\") pod \"collect-profiles-29492250-dccrj\" (UID: \"da79d64c-a115-4a32-a92d-a6f99ad18b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.393719 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95pvp\" (UniqueName: \"kubernetes.io/projected/da79d64c-a115-4a32-a92d-a6f99ad18b93-kube-api-access-95pvp\") pod \"collect-profiles-29492250-dccrj\" (UID: \"da79d64c-a115-4a32-a92d-a6f99ad18b93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" Jan 27 17:30:00 crc kubenswrapper[5049]: I0127 17:30:00.526050 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" Jan 27 17:30:01 crc kubenswrapper[5049]: I0127 17:30:01.012222 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj"] Jan 27 17:30:01 crc kubenswrapper[5049]: I0127 17:30:01.224553 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" event={"ID":"da79d64c-a115-4a32-a92d-a6f99ad18b93","Type":"ContainerStarted","Data":"5aecb04c1f480a0c188f27c089e3373028b6cf81e6a54f6aec18bcc98fd1b76d"} Jan 27 17:30:01 crc kubenswrapper[5049]: I0127 17:30:01.224616 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" event={"ID":"da79d64c-a115-4a32-a92d-a6f99ad18b93","Type":"ContainerStarted","Data":"e3884925c76dfa8713ec16af356ce9773cd9bcf656958173c68b607036d1ccc3"} Jan 27 17:30:01 crc kubenswrapper[5049]: I0127 17:30:01.246198 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" podStartSLOduration=1.246181552 podStartE2EDuration="1.246181552s" podCreationTimestamp="2026-01-27 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:30:01.241896829 +0000 UTC m=+1976.340870398" watchObservedRunningTime="2026-01-27 17:30:01.246181552 +0000 UTC m=+1976.345155101" Jan 27 17:30:02 crc kubenswrapper[5049]: I0127 17:30:02.237807 5049 generic.go:334] "Generic (PLEG): container finished" podID="da79d64c-a115-4a32-a92d-a6f99ad18b93" containerID="5aecb04c1f480a0c188f27c089e3373028b6cf81e6a54f6aec18bcc98fd1b76d" exitCode=0 Jan 27 17:30:02 crc kubenswrapper[5049]: I0127 17:30:02.237934 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" event={"ID":"da79d64c-a115-4a32-a92d-a6f99ad18b93","Type":"ContainerDied","Data":"5aecb04c1f480a0c188f27c089e3373028b6cf81e6a54f6aec18bcc98fd1b76d"} Jan 27 17:30:03 crc kubenswrapper[5049]: I0127 17:30:03.599289 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" Jan 27 17:30:03 crc kubenswrapper[5049]: I0127 17:30:03.724540 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95pvp\" (UniqueName: \"kubernetes.io/projected/da79d64c-a115-4a32-a92d-a6f99ad18b93-kube-api-access-95pvp\") pod \"da79d64c-a115-4a32-a92d-a6f99ad18b93\" (UID: \"da79d64c-a115-4a32-a92d-a6f99ad18b93\") " Jan 27 17:30:03 crc kubenswrapper[5049]: I0127 17:30:03.724641 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da79d64c-a115-4a32-a92d-a6f99ad18b93-config-volume\") pod \"da79d64c-a115-4a32-a92d-a6f99ad18b93\" (UID: \"da79d64c-a115-4a32-a92d-a6f99ad18b93\") " Jan 27 17:30:03 crc kubenswrapper[5049]: I0127 17:30:03.724719 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da79d64c-a115-4a32-a92d-a6f99ad18b93-secret-volume\") pod \"da79d64c-a115-4a32-a92d-a6f99ad18b93\" (UID: \"da79d64c-a115-4a32-a92d-a6f99ad18b93\") " Jan 27 17:30:03 crc kubenswrapper[5049]: I0127 17:30:03.725266 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da79d64c-a115-4a32-a92d-a6f99ad18b93-config-volume" (OuterVolumeSpecName: "config-volume") pod "da79d64c-a115-4a32-a92d-a6f99ad18b93" (UID: "da79d64c-a115-4a32-a92d-a6f99ad18b93"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 17:30:03 crc kubenswrapper[5049]: I0127 17:30:03.729526 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da79d64c-a115-4a32-a92d-a6f99ad18b93-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "da79d64c-a115-4a32-a92d-a6f99ad18b93" (UID: "da79d64c-a115-4a32-a92d-a6f99ad18b93"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 17:30:03 crc kubenswrapper[5049]: I0127 17:30:03.729809 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da79d64c-a115-4a32-a92d-a6f99ad18b93-kube-api-access-95pvp" (OuterVolumeSpecName: "kube-api-access-95pvp") pod "da79d64c-a115-4a32-a92d-a6f99ad18b93" (UID: "da79d64c-a115-4a32-a92d-a6f99ad18b93"). InnerVolumeSpecName "kube-api-access-95pvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:30:03 crc kubenswrapper[5049]: I0127 17:30:03.827249 5049 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da79d64c-a115-4a32-a92d-a6f99ad18b93-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 17:30:03 crc kubenswrapper[5049]: I0127 17:30:03.827301 5049 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da79d64c-a115-4a32-a92d-a6f99ad18b93-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 17:30:03 crc kubenswrapper[5049]: I0127 17:30:03.827321 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95pvp\" (UniqueName: \"kubernetes.io/projected/da79d64c-a115-4a32-a92d-a6f99ad18b93-kube-api-access-95pvp\") on node \"crc\" DevicePath \"\"" Jan 27 17:30:04 crc kubenswrapper[5049]: I0127 17:30:04.258453 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" event={"ID":"da79d64c-a115-4a32-a92d-a6f99ad18b93","Type":"ContainerDied","Data":"e3884925c76dfa8713ec16af356ce9773cd9bcf656958173c68b607036d1ccc3"} Jan 27 17:30:04 crc kubenswrapper[5049]: I0127 17:30:04.258493 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3884925c76dfa8713ec16af356ce9773cd9bcf656958173c68b607036d1ccc3" Jan 27 17:30:04 crc kubenswrapper[5049]: I0127 17:30:04.258541 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj" Jan 27 17:30:04 crc kubenswrapper[5049]: I0127 17:30:04.333101 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc"] Jan 27 17:30:04 crc kubenswrapper[5049]: I0127 17:30:04.340003 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492205-mdgtc"] Jan 27 17:30:05 crc kubenswrapper[5049]: I0127 17:30:05.664341 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aeb400e-8352-4de5-baf4-e64073f57d32" path="/var/lib/kubelet/pods/7aeb400e-8352-4de5-baf4-e64073f57d32/volumes" Jan 27 17:30:09 crc kubenswrapper[5049]: I0127 17:30:09.646934 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:30:09 crc kubenswrapper[5049]: E0127 17:30:09.647915 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:30:14 crc kubenswrapper[5049]: I0127 17:30:14.582230 5049 scope.go:117] "RemoveContainer" containerID="9d7a8a58384555f6e1888850d427ed817d9c32bbafb01f166a0bb5570c8c5d0a" Jan 27 17:30:20 crc kubenswrapper[5049]: I0127 17:30:20.646224 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:30:21 crc kubenswrapper[5049]: I0127 17:30:21.418805 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" 
event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"151d4bb24a0fd43034dab23af7c046d8430c4f7b984bb791c865ade78b08e85a"} Jan 27 17:32:47 crc kubenswrapper[5049]: I0127 17:32:47.781095 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:32:47 crc kubenswrapper[5049]: I0127 17:32:47.781829 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:33:17 crc kubenswrapper[5049]: I0127 17:33:17.782178 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:33:17 crc kubenswrapper[5049]: I0127 17:33:17.783098 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:33:47 crc kubenswrapper[5049]: I0127 17:33:47.781637 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:33:47 crc kubenswrapper[5049]: I0127 17:33:47.782409 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:33:47 crc kubenswrapper[5049]: I0127 17:33:47.782482 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 17:33:47 crc kubenswrapper[5049]: I0127 17:33:47.783696 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"151d4bb24a0fd43034dab23af7c046d8430c4f7b984bb791c865ade78b08e85a"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:33:47 crc kubenswrapper[5049]: I0127 17:33:47.783826 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://151d4bb24a0fd43034dab23af7c046d8430c4f7b984bb791c865ade78b08e85a" gracePeriod=600 Jan 27 17:33:48 crc kubenswrapper[5049]: I0127 17:33:48.846569 5049 generic.go:334] "Generic (PLEG): container finished" 
podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="151d4bb24a0fd43034dab23af7c046d8430c4f7b984bb791c865ade78b08e85a" exitCode=0 Jan 27 17:33:48 crc kubenswrapper[5049]: I0127 17:33:48.846723 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"151d4bb24a0fd43034dab23af7c046d8430c4f7b984bb791c865ade78b08e85a"} Jan 27 17:33:48 crc kubenswrapper[5049]: I0127 17:33:48.847354 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"} Jan 27 17:33:48 crc kubenswrapper[5049]: I0127 17:33:48.847388 5049 scope.go:117] "RemoveContainer" containerID="4365ecaacd780b11645d5e6e8ac4bc7cc880b4c87d8ef04bbef67d147727bf00" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.185041 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8d55n"] Jan 27 17:33:50 crc kubenswrapper[5049]: E0127 17:33:50.186073 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da79d64c-a115-4a32-a92d-a6f99ad18b93" containerName="collect-profiles" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.186105 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="da79d64c-a115-4a32-a92d-a6f99ad18b93" containerName="collect-profiles" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.186434 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="da79d64c-a115-4a32-a92d-a6f99ad18b93" containerName="collect-profiles" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.188757 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.209263 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8d55n"] Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.260445 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d2023e-303f-4a53-97df-71dcafd57c81-utilities\") pod \"certified-operators-8d55n\" (UID: \"d0d2023e-303f-4a53-97df-71dcafd57c81\") " pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.260539 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j64h\" (UniqueName: \"kubernetes.io/projected/d0d2023e-303f-4a53-97df-71dcafd57c81-kube-api-access-5j64h\") pod \"certified-operators-8d55n\" (UID: \"d0d2023e-303f-4a53-97df-71dcafd57c81\") " pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.260768 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d2023e-303f-4a53-97df-71dcafd57c81-catalog-content\") pod \"certified-operators-8d55n\" (UID: \"d0d2023e-303f-4a53-97df-71dcafd57c81\") " pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.362388 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d2023e-303f-4a53-97df-71dcafd57c81-utilities\") pod \"certified-operators-8d55n\" (UID: \"d0d2023e-303f-4a53-97df-71dcafd57c81\") " pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.362477 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j64h\" (UniqueName: \"kubernetes.io/projected/d0d2023e-303f-4a53-97df-71dcafd57c81-kube-api-access-5j64h\") pod \"certified-operators-8d55n\" (UID: \"d0d2023e-303f-4a53-97df-71dcafd57c81\") " pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.362553 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d2023e-303f-4a53-97df-71dcafd57c81-catalog-content\") pod \"certified-operators-8d55n\" (UID: \"d0d2023e-303f-4a53-97df-71dcafd57c81\") " pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.363115 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d2023e-303f-4a53-97df-71dcafd57c81-catalog-content\") pod \"certified-operators-8d55n\" (UID: \"d0d2023e-303f-4a53-97df-71dcafd57c81\") " pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.363348 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d2023e-303f-4a53-97df-71dcafd57c81-utilities\") pod \"certified-operators-8d55n\" (UID: \"d0d2023e-303f-4a53-97df-71dcafd57c81\") " pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.389263 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5j64h\" (UniqueName: \"kubernetes.io/projected/d0d2023e-303f-4a53-97df-71dcafd57c81-kube-api-access-5j64h\") pod \"certified-operators-8d55n\" (UID: \"d0d2023e-303f-4a53-97df-71dcafd57c81\") " pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.537514 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:33:50 crc kubenswrapper[5049]: I0127 17:33:50.871116 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8d55n"] Jan 27 17:33:51 crc kubenswrapper[5049]: I0127 17:33:51.874642 5049 generic.go:334] "Generic (PLEG): container finished" podID="d0d2023e-303f-4a53-97df-71dcafd57c81" containerID="10ce24c507accb0ed73614fd436e0c91e2f6e09ee95fb4aa79dd990e3989eecd" exitCode=0 Jan 27 17:33:51 crc kubenswrapper[5049]: I0127 17:33:51.874719 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d55n" event={"ID":"d0d2023e-303f-4a53-97df-71dcafd57c81","Type":"ContainerDied","Data":"10ce24c507accb0ed73614fd436e0c91e2f6e09ee95fb4aa79dd990e3989eecd"} Jan 27 17:33:51 crc kubenswrapper[5049]: I0127 17:33:51.875043 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d55n" event={"ID":"d0d2023e-303f-4a53-97df-71dcafd57c81","Type":"ContainerStarted","Data":"e3b4410ffb503116acc79d9b2ea9351cc16dab6467c9fd28716e947eed48db7c"} Jan 27 17:33:51 crc kubenswrapper[5049]: I0127 17:33:51.877105 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 17:33:54 crc kubenswrapper[5049]: I0127 17:33:54.902912 5049 generic.go:334] "Generic (PLEG): container finished" podID="d0d2023e-303f-4a53-97df-71dcafd57c81" containerID="43574bae4eb9be64cd419dbcdd1c36e56674aebdc65a1a83100bbb798e65e901" exitCode=0 Jan 27 17:33:54 crc kubenswrapper[5049]: I0127 17:33:54.903019 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d55n" event={"ID":"d0d2023e-303f-4a53-97df-71dcafd57c81","Type":"ContainerDied","Data":"43574bae4eb9be64cd419dbcdd1c36e56674aebdc65a1a83100bbb798e65e901"} Jan 27 17:33:55 crc kubenswrapper[5049]: I0127 17:33:55.911483 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d55n" event={"ID":"d0d2023e-303f-4a53-97df-71dcafd57c81","Type":"ContainerStarted","Data":"7b5deab69af07274ad76e0d917655ecc586b67e22a84b56f3dc2e38b67a64aee"} Jan 27 17:33:55 crc kubenswrapper[5049]: I0127 17:33:55.949357 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8d55n" podStartSLOduration=2.26852138 podStartE2EDuration="5.949331282s" podCreationTimestamp="2026-01-27 17:33:50 +0000 UTC" firstStartedPulling="2026-01-27 17:33:51.876759032 +0000 UTC m=+2206.975732601" lastFinishedPulling="2026-01-27 17:33:55.557568914 +0000 UTC m=+2210.656542503" observedRunningTime="2026-01-27 17:33:55.94431303 +0000 UTC m=+2211.043286589" watchObservedRunningTime="2026-01-27 17:33:55.949331282 +0000 UTC m=+2211.048304861" Jan 27 17:34:00 crc kubenswrapper[5049]: I0127 17:34:00.538612 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:34:00 crc kubenswrapper[5049]: I0127 17:34:00.539485 5049 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:34:00 crc kubenswrapper[5049]: I0127 17:34:00.614420 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:34:01 crc kubenswrapper[5049]: I0127 17:34:01.029784 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:34:01 crc kubenswrapper[5049]: I0127 17:34:01.096870 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8d55n"] Jan 27 17:34:02 crc kubenswrapper[5049]: I0127 17:34:02.981329 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8d55n" podUID="d0d2023e-303f-4a53-97df-71dcafd57c81" containerName="registry-server" containerID="cri-o://7b5deab69af07274ad76e0d917655ecc586b67e22a84b56f3dc2e38b67a64aee" gracePeriod=2 Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.447795 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.463545 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j64h\" (UniqueName: \"kubernetes.io/projected/d0d2023e-303f-4a53-97df-71dcafd57c81-kube-api-access-5j64h\") pod \"d0d2023e-303f-4a53-97df-71dcafd57c81\" (UID: \"d0d2023e-303f-4a53-97df-71dcafd57c81\") " Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.463850 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d2023e-303f-4a53-97df-71dcafd57c81-utilities\") pod \"d0d2023e-303f-4a53-97df-71dcafd57c81\" (UID: \"d0d2023e-303f-4a53-97df-71dcafd57c81\") " Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.463896 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d2023e-303f-4a53-97df-71dcafd57c81-catalog-content\") pod \"d0d2023e-303f-4a53-97df-71dcafd57c81\" (UID: \"d0d2023e-303f-4a53-97df-71dcafd57c81\") " Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.465585 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0d2023e-303f-4a53-97df-71dcafd57c81-utilities" (OuterVolumeSpecName: "utilities") pod "d0d2023e-303f-4a53-97df-71dcafd57c81" (UID: "d0d2023e-303f-4a53-97df-71dcafd57c81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.470446 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0d2023e-303f-4a53-97df-71dcafd57c81-kube-api-access-5j64h" (OuterVolumeSpecName: "kube-api-access-5j64h") pod "d0d2023e-303f-4a53-97df-71dcafd57c81" (UID: "d0d2023e-303f-4a53-97df-71dcafd57c81"). InnerVolumeSpecName "kube-api-access-5j64h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.529911 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0d2023e-303f-4a53-97df-71dcafd57c81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0d2023e-303f-4a53-97df-71dcafd57c81" (UID: "d0d2023e-303f-4a53-97df-71dcafd57c81"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.566272 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0d2023e-303f-4a53-97df-71dcafd57c81-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.566528 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0d2023e-303f-4a53-97df-71dcafd57c81-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.566608 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j64h\" (UniqueName: \"kubernetes.io/projected/d0d2023e-303f-4a53-97df-71dcafd57c81-kube-api-access-5j64h\") on node \"crc\" DevicePath \"\"" Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.993442 5049 generic.go:334] "Generic (PLEG): container finished" podID="d0d2023e-303f-4a53-97df-71dcafd57c81" containerID="7b5deab69af07274ad76e0d917655ecc586b67e22a84b56f3dc2e38b67a64aee" exitCode=0 Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.993484 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d55n" event={"ID":"d0d2023e-303f-4a53-97df-71dcafd57c81","Type":"ContainerDied","Data":"7b5deab69af07274ad76e0d917655ecc586b67e22a84b56f3dc2e38b67a64aee"} Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.993513 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8d55n" event={"ID":"d0d2023e-303f-4a53-97df-71dcafd57c81","Type":"ContainerDied","Data":"e3b4410ffb503116acc79d9b2ea9351cc16dab6467c9fd28716e947eed48db7c"} Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.993532 5049 scope.go:117] "RemoveContainer" containerID="7b5deab69af07274ad76e0d917655ecc586b67e22a84b56f3dc2e38b67a64aee" Jan 27 17:34:03 crc kubenswrapper[5049]: I0127 17:34:03.993587 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8d55n" Jan 27 17:34:04 crc kubenswrapper[5049]: I0127 17:34:04.031504 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8d55n"] Jan 27 17:34:04 crc kubenswrapper[5049]: I0127 17:34:04.033367 5049 scope.go:117] "RemoveContainer" containerID="43574bae4eb9be64cd419dbcdd1c36e56674aebdc65a1a83100bbb798e65e901" Jan 27 17:34:04 crc kubenswrapper[5049]: I0127 17:34:04.041661 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8d55n"] Jan 27 17:34:04 crc kubenswrapper[5049]: I0127 17:34:04.063062 5049 scope.go:117] "RemoveContainer" containerID="10ce24c507accb0ed73614fd436e0c91e2f6e09ee95fb4aa79dd990e3989eecd" Jan 27 17:34:04 crc kubenswrapper[5049]: I0127 17:34:04.098281 5049 scope.go:117] "RemoveContainer" containerID="7b5deab69af07274ad76e0d917655ecc586b67e22a84b56f3dc2e38b67a64aee" Jan 27 17:34:04 crc kubenswrapper[5049]: E0127 17:34:04.100882 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b5deab69af07274ad76e0d917655ecc586b67e22a84b56f3dc2e38b67a64aee\": container with ID starting with 7b5deab69af07274ad76e0d917655ecc586b67e22a84b56f3dc2e38b67a64aee not found: ID does not exist" containerID="7b5deab69af07274ad76e0d917655ecc586b67e22a84b56f3dc2e38b67a64aee" Jan 27 17:34:04 crc kubenswrapper[5049]: I0127 17:34:04.101050 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b5deab69af07274ad76e0d917655ecc586b67e22a84b56f3dc2e38b67a64aee"} err="failed to get container status \"7b5deab69af07274ad76e0d917655ecc586b67e22a84b56f3dc2e38b67a64aee\": rpc error: code = NotFound desc = could not find container \"7b5deab69af07274ad76e0d917655ecc586b67e22a84b56f3dc2e38b67a64aee\": container with ID starting with 7b5deab69af07274ad76e0d917655ecc586b67e22a84b56f3dc2e38b67a64aee not found: ID does not exist" Jan 27 17:34:04 crc kubenswrapper[5049]: I0127 17:34:04.101109 5049 scope.go:117] "RemoveContainer" containerID="43574bae4eb9be64cd419dbcdd1c36e56674aebdc65a1a83100bbb798e65e901" Jan 27 17:34:04 crc kubenswrapper[5049]: E0127 17:34:04.101745 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43574bae4eb9be64cd419dbcdd1c36e56674aebdc65a1a83100bbb798e65e901\": container with ID starting with 43574bae4eb9be64cd419dbcdd1c36e56674aebdc65a1a83100bbb798e65e901 not found: ID does not exist" containerID="43574bae4eb9be64cd419dbcdd1c36e56674aebdc65a1a83100bbb798e65e901" Jan 27 17:34:04 crc kubenswrapper[5049]: I0127 17:34:04.101794 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43574bae4eb9be64cd419dbcdd1c36e56674aebdc65a1a83100bbb798e65e901"} err="failed to get container status \"43574bae4eb9be64cd419dbcdd1c36e56674aebdc65a1a83100bbb798e65e901\": rpc error: code = NotFound desc = could not find container \"43574bae4eb9be64cd419dbcdd1c36e56674aebdc65a1a83100bbb798e65e901\": container with ID starting with 43574bae4eb9be64cd419dbcdd1c36e56674aebdc65a1a83100bbb798e65e901 not found: ID does not exist" Jan 27 17:34:04 crc kubenswrapper[5049]: I0127 17:34:04.101822 5049 scope.go:117] "RemoveContainer" containerID="10ce24c507accb0ed73614fd436e0c91e2f6e09ee95fb4aa79dd990e3989eecd" Jan 27 17:34:04 crc kubenswrapper[5049]: E0127 17:34:04.102295 5049 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"10ce24c507accb0ed73614fd436e0c91e2f6e09ee95fb4aa79dd990e3989eecd\": container with ID starting with 10ce24c507accb0ed73614fd436e0c91e2f6e09ee95fb4aa79dd990e3989eecd not found: ID does not exist" containerID="10ce24c507accb0ed73614fd436e0c91e2f6e09ee95fb4aa79dd990e3989eecd" Jan 27 17:34:04 crc kubenswrapper[5049]: I0127 17:34:04.102354 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10ce24c507accb0ed73614fd436e0c91e2f6e09ee95fb4aa79dd990e3989eecd"} err="failed to get container status \"10ce24c507accb0ed73614fd436e0c91e2f6e09ee95fb4aa79dd990e3989eecd\": rpc error: code = NotFound desc = could not find container \"10ce24c507accb0ed73614fd436e0c91e2f6e09ee95fb4aa79dd990e3989eecd\": container with ID starting with 10ce24c507accb0ed73614fd436e0c91e2f6e09ee95fb4aa79dd990e3989eecd not found: ID does not exist" Jan 27 17:34:05 crc kubenswrapper[5049]: I0127 17:34:05.666919 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0d2023e-303f-4a53-97df-71dcafd57c81" path="/var/lib/kubelet/pods/d0d2023e-303f-4a53-97df-71dcafd57c81/volumes" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.523140 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p26fb"] Jan 27 17:34:31 crc kubenswrapper[5049]: E0127 17:34:31.524051 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d2023e-303f-4a53-97df-71dcafd57c81" containerName="registry-server" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.524070 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d2023e-303f-4a53-97df-71dcafd57c81" containerName="registry-server" Jan 27 17:34:31 crc kubenswrapper[5049]: E0127 17:34:31.524086 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d2023e-303f-4a53-97df-71dcafd57c81" containerName="extract-content" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.524094 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d2023e-303f-4a53-97df-71dcafd57c81" containerName="extract-content" Jan 27 17:34:31 crc kubenswrapper[5049]: E0127 17:34:31.524113 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d2023e-303f-4a53-97df-71dcafd57c81" containerName="extract-utilities" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.524122 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d2023e-303f-4a53-97df-71dcafd57c81" containerName="extract-utilities" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.524288 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d2023e-303f-4a53-97df-71dcafd57c81" containerName="registry-server" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.525638 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p26fb" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.535974 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p26fb"] Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.544642 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c08210e-ef21-4935-be0d-ad7084208641-utilities\") pod \"community-operators-p26fb\" (UID: \"7c08210e-ef21-4935-be0d-ad7084208641\") " pod="openshift-marketplace/community-operators-p26fb" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.544797 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c08210e-ef21-4935-be0d-ad7084208641-catalog-content\") pod \"community-operators-p26fb\" (UID: \"7c08210e-ef21-4935-be0d-ad7084208641\") " pod="openshift-marketplace/community-operators-p26fb" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.544874 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx6kw\" (UniqueName: \"kubernetes.io/projected/7c08210e-ef21-4935-be0d-ad7084208641-kube-api-access-nx6kw\") pod \"community-operators-p26fb\" (UID: \"7c08210e-ef21-4935-be0d-ad7084208641\") " pod="openshift-marketplace/community-operators-p26fb" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.645427 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c08210e-ef21-4935-be0d-ad7084208641-catalog-content\") pod \"community-operators-p26fb\" (UID: \"7c08210e-ef21-4935-be0d-ad7084208641\") " pod="openshift-marketplace/community-operators-p26fb" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.645487 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx6kw\" (UniqueName: \"kubernetes.io/projected/7c08210e-ef21-4935-be0d-ad7084208641-kube-api-access-nx6kw\") pod \"community-operators-p26fb\" (UID: \"7c08210e-ef21-4935-be0d-ad7084208641\") " pod="openshift-marketplace/community-operators-p26fb" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.645547 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c08210e-ef21-4935-be0d-ad7084208641-utilities\") pod \"community-operators-p26fb\" (UID: \"7c08210e-ef21-4935-be0d-ad7084208641\") " pod="openshift-marketplace/community-operators-p26fb" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.646032 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c08210e-ef21-4935-be0d-ad7084208641-utilities\") pod \"community-operators-p26fb\" (UID: \"7c08210e-ef21-4935-be0d-ad7084208641\") " pod="openshift-marketplace/community-operators-p26fb" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.646299 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c08210e-ef21-4935-be0d-ad7084208641-catalog-content\") pod \"community-operators-p26fb\" (UID: \"7c08210e-ef21-4935-be0d-ad7084208641\") " pod="openshift-marketplace/community-operators-p26fb" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.663568 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nx6kw\" (UniqueName: \"kubernetes.io/projected/7c08210e-ef21-4935-be0d-ad7084208641-kube-api-access-nx6kw\") pod \"community-operators-p26fb\" (UID: \"7c08210e-ef21-4935-be0d-ad7084208641\") " pod="openshift-marketplace/community-operators-p26fb" Jan 27 17:34:31 crc kubenswrapper[5049]: I0127 17:34:31.846552 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p26fb" Jan 27 17:34:32 crc kubenswrapper[5049]: I0127 17:34:32.298595 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p26fb"] Jan 27 17:34:33 crc kubenswrapper[5049]: I0127 17:34:33.280747 5049 generic.go:334] "Generic (PLEG): container finished" podID="7c08210e-ef21-4935-be0d-ad7084208641" containerID="c9501466aa195a013bf50f313ba4622cbde42c2a794d18eaba7388212fb83842" exitCode=0 Jan 27 17:34:33 crc kubenswrapper[5049]: I0127 17:34:33.280914 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p26fb" event={"ID":"7c08210e-ef21-4935-be0d-ad7084208641","Type":"ContainerDied","Data":"c9501466aa195a013bf50f313ba4622cbde42c2a794d18eaba7388212fb83842"} Jan 27 17:34:33 crc kubenswrapper[5049]: I0127 17:34:33.281327 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p26fb" event={"ID":"7c08210e-ef21-4935-be0d-ad7084208641","Type":"ContainerStarted","Data":"d99a837e755b17855fa2b341a8c99ac3ea3ff35be6683481e6e5c6463aaacff6"} Jan 27 17:34:35 crc kubenswrapper[5049]: I0127 17:34:35.306640 5049 generic.go:334] "Generic (PLEG): container finished" podID="7c08210e-ef21-4935-be0d-ad7084208641" containerID="b5a52eac6bcf09e2d1d9f56db013d7dc49a707ab6af6df2a894b8cbf625a956f" exitCode=0 Jan 27 17:34:35 crc kubenswrapper[5049]: I0127 17:34:35.306824 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p26fb" event={"ID":"7c08210e-ef21-4935-be0d-ad7084208641","Type":"ContainerDied","Data":"b5a52eac6bcf09e2d1d9f56db013d7dc49a707ab6af6df2a894b8cbf625a956f"} Jan 27 17:34:36 crc kubenswrapper[5049]: I0127 17:34:36.321152 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p26fb" event={"ID":"7c08210e-ef21-4935-be0d-ad7084208641","Type":"ContainerStarted","Data":"534cf5946ea30415d0bce47a266778d114dec0529471e9b555d54aad424231a3"} Jan 27 17:34:36 crc kubenswrapper[5049]: I0127 17:34:36.354031 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p26fb" podStartSLOduration=2.643851076 podStartE2EDuration="5.354000092s" podCreationTimestamp="2026-01-27 17:34:31 +0000 UTC" firstStartedPulling="2026-01-27 17:34:33.287288439 +0000 UTC m=+2248.386262028" lastFinishedPulling="2026-01-27 17:34:35.997437465 +0000 UTC m=+2251.096411044" observedRunningTime="2026-01-27 17:34:36.348809096 +0000 UTC m=+2251.447782725" watchObservedRunningTime="2026-01-27 17:34:36.354000092 +0000 UTC m=+2251.452973691" Jan 27 17:34:38 crc kubenswrapper[5049]: I0127 17:34:38.146517 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bnlq2"] Jan 27 17:34:38 crc kubenswrapper[5049]: I0127 17:34:38.149104 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bnlq2" Jan 27 17:34:38 crc kubenswrapper[5049]: I0127 17:34:38.157800 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnlq2"] Jan 27 17:34:38 crc kubenswrapper[5049]: I0127 17:34:38.258171 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5974e85-8688-41c1-98ef-583eb9a22b80-catalog-content\") pod \"redhat-marketplace-bnlq2\" (UID: \"f5974e85-8688-41c1-98ef-583eb9a22b80\") " pod="openshift-marketplace/redhat-marketplace-bnlq2" Jan 27 17:34:38 crc kubenswrapper[5049]: I0127 17:34:38.258409 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5974e85-8688-41c1-98ef-583eb9a22b80-utilities\") pod \"redhat-marketplace-bnlq2\" (UID: \"f5974e85-8688-41c1-98ef-583eb9a22b80\") " pod="openshift-marketplace/redhat-marketplace-bnlq2" Jan 27 17:34:38 crc kubenswrapper[5049]: I0127 17:34:38.258520 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wglzr\" (UniqueName: \"kubernetes.io/projected/f5974e85-8688-41c1-98ef-583eb9a22b80-kube-api-access-wglzr\") pod \"redhat-marketplace-bnlq2\" (UID: \"f5974e85-8688-41c1-98ef-583eb9a22b80\") " pod="openshift-marketplace/redhat-marketplace-bnlq2" Jan 27 17:34:38 crc kubenswrapper[5049]: I0127 17:34:38.360221 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5974e85-8688-41c1-98ef-583eb9a22b80-utilities\") pod \"redhat-marketplace-bnlq2\" (UID: \"f5974e85-8688-41c1-98ef-583eb9a22b80\") " pod="openshift-marketplace/redhat-marketplace-bnlq2" Jan 27 17:34:38 crc kubenswrapper[5049]: I0127 17:34:38.360349 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wglzr\" (UniqueName: \"kubernetes.io/projected/f5974e85-8688-41c1-98ef-583eb9a22b80-kube-api-access-wglzr\") pod \"redhat-marketplace-bnlq2\" (UID: \"f5974e85-8688-41c1-98ef-583eb9a22b80\") " pod="openshift-marketplace/redhat-marketplace-bnlq2" Jan 27 17:34:38 crc kubenswrapper[5049]: I0127 17:34:38.360393 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5974e85-8688-41c1-98ef-583eb9a22b80-catalog-content\") pod \"redhat-marketplace-bnlq2\" (UID: \"f5974e85-8688-41c1-98ef-583eb9a22b80\") " pod="openshift-marketplace/redhat-marketplace-bnlq2" Jan 27 17:34:38 crc kubenswrapper[5049]: I0127 17:34:38.360882 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5974e85-8688-41c1-98ef-583eb9a22b80-utilities\") pod \"redhat-marketplace-bnlq2\" (UID: \"f5974e85-8688-41c1-98ef-583eb9a22b80\") " pod="openshift-marketplace/redhat-marketplace-bnlq2" Jan 27 17:34:38 crc kubenswrapper[5049]: I0127 17:34:38.361034 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5974e85-8688-41c1-98ef-583eb9a22b80-catalog-content\") pod \"redhat-marketplace-bnlq2\" (UID: \"f5974e85-8688-41c1-98ef-583eb9a22b80\") " pod="openshift-marketplace/redhat-marketplace-bnlq2" Jan 27 17:34:38 crc kubenswrapper[5049]: I0127 17:34:38.394230 5049 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wglzr\" (UniqueName: \"kubernetes.io/projected/f5974e85-8688-41c1-98ef-583eb9a22b80-kube-api-access-wglzr\") pod \"redhat-marketplace-bnlq2\" (UID: \"f5974e85-8688-41c1-98ef-583eb9a22b80\") " pod="openshift-marketplace/redhat-marketplace-bnlq2" Jan 27 17:34:38 crc kubenswrapper[5049]: I0127 17:34:38.468357 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bnlq2" Jan 27 17:34:38 crc kubenswrapper[5049]: I0127 17:34:38.746573 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnlq2"] Jan 27 17:34:38 crc kubenswrapper[5049]: W0127 17:34:38.752469 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5974e85_8688_41c1_98ef_583eb9a22b80.slice/crio-f1ca71227e1aeb67b14fa37aaeaf4be17530df9150289f04b49a7cacbc74584f WatchSource:0}: Error finding container f1ca71227e1aeb67b14fa37aaeaf4be17530df9150289f04b49a7cacbc74584f: Status 404 returned error can't find the container with id f1ca71227e1aeb67b14fa37aaeaf4be17530df9150289f04b49a7cacbc74584f Jan 27 17:34:39 crc kubenswrapper[5049]: I0127 17:34:39.349463 5049 generic.go:334] "Generic (PLEG): container finished" podID="f5974e85-8688-41c1-98ef-583eb9a22b80" containerID="a271df7e41ed775d7e12f0b3892c83a96e16214d06104ec6eb2652c5b960eb2c" exitCode=0 Jan 27 17:34:39 crc kubenswrapper[5049]: I0127 17:34:39.349522 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnlq2" event={"ID":"f5974e85-8688-41c1-98ef-583eb9a22b80","Type":"ContainerDied","Data":"a271df7e41ed775d7e12f0b3892c83a96e16214d06104ec6eb2652c5b960eb2c"} Jan 27 17:34:39 crc kubenswrapper[5049]: I0127 17:34:39.349561 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnlq2" event={"ID":"f5974e85-8688-41c1-98ef-583eb9a22b80","Type":"ContainerStarted","Data":"f1ca71227e1aeb67b14fa37aaeaf4be17530df9150289f04b49a7cacbc74584f"} Jan 27 17:34:41 crc kubenswrapper[5049]: I0127 17:34:41.382410 5049 generic.go:334] "Generic (PLEG): container finished" podID="f5974e85-8688-41c1-98ef-583eb9a22b80" containerID="ed48e044abe761e4281ceb7a43037a73e5bd8b1a021dd91c546e16485f34af10" exitCode=0 Jan 27 17:34:41 crc kubenswrapper[5049]: I0127 17:34:41.382549 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnlq2" event={"ID":"f5974e85-8688-41c1-98ef-583eb9a22b80","Type":"ContainerDied","Data":"ed48e044abe761e4281ceb7a43037a73e5bd8b1a021dd91c546e16485f34af10"} Jan 27 17:34:41 crc kubenswrapper[5049]: I0127 17:34:41.847599 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p26fb" Jan 27 17:34:41 crc kubenswrapper[5049]: I0127 17:34:41.847647 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p26fb" Jan 27 17:34:41 crc kubenswrapper[5049]: I0127 17:34:41.924853 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p26fb" Jan 27 17:34:42 crc kubenswrapper[5049]: I0127 17:34:42.411227 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnlq2" event={"ID":"f5974e85-8688-41c1-98ef-583eb9a22b80","Type":"ContainerStarted","Data":"ba847e6740274d1d664f5bc14056c43144ce4bfec48aa79a9d40c4a95d7cf891"} 
Jan 27 17:34:42 crc kubenswrapper[5049]: I0127 17:34:42.442610 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bnlq2" podStartSLOduration=1.734275334 podStartE2EDuration="4.442588468s" podCreationTimestamp="2026-01-27 17:34:38 +0000 UTC" firstStartedPulling="2026-01-27 17:34:39.351852889 +0000 UTC m=+2254.450826438" lastFinishedPulling="2026-01-27 17:34:42.060166023 +0000 UTC m=+2257.159139572" observedRunningTime="2026-01-27 17:34:42.438259646 +0000 UTC m=+2257.537233195" watchObservedRunningTime="2026-01-27 17:34:42.442588468 +0000 UTC m=+2257.541562017"
Jan 27 17:34:42 crc kubenswrapper[5049]: I0127 17:34:42.491842 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p26fb"
Jan 27 17:34:43 crc kubenswrapper[5049]: I0127 17:34:43.698780 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p26fb"]
Jan 27 17:34:44 crc kubenswrapper[5049]: I0127 17:34:44.428225 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p26fb" podUID="7c08210e-ef21-4935-be0d-ad7084208641" containerName="registry-server" containerID="cri-o://534cf5946ea30415d0bce47a266778d114dec0529471e9b555d54aad424231a3" gracePeriod=2
Jan 27 17:34:44 crc kubenswrapper[5049]: I0127 17:34:44.911440 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p26fb"
Jan 27 17:34:44 crc kubenswrapper[5049]: I0127 17:34:44.961851 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c08210e-ef21-4935-be0d-ad7084208641-utilities\") pod \"7c08210e-ef21-4935-be0d-ad7084208641\" (UID: \"7c08210e-ef21-4935-be0d-ad7084208641\") "
Jan 27 17:34:44 crc kubenswrapper[5049]: I0127 17:34:44.961951 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c08210e-ef21-4935-be0d-ad7084208641-catalog-content\") pod \"7c08210e-ef21-4935-be0d-ad7084208641\" (UID: \"7c08210e-ef21-4935-be0d-ad7084208641\") "
Jan 27 17:34:44 crc kubenswrapper[5049]: I0127 17:34:44.962000 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx6kw\" (UniqueName: \"kubernetes.io/projected/7c08210e-ef21-4935-be0d-ad7084208641-kube-api-access-nx6kw\") pod \"7c08210e-ef21-4935-be0d-ad7084208641\" (UID: \"7c08210e-ef21-4935-be0d-ad7084208641\") "
Jan 27 17:34:44 crc kubenswrapper[5049]: I0127 17:34:44.963736 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c08210e-ef21-4935-be0d-ad7084208641-utilities" (OuterVolumeSpecName: "utilities") pod "7c08210e-ef21-4935-be0d-ad7084208641" (UID: "7c08210e-ef21-4935-be0d-ad7084208641"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:34:44 crc kubenswrapper[5049]: I0127 17:34:44.974200 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c08210e-ef21-4935-be0d-ad7084208641-kube-api-access-nx6kw" (OuterVolumeSpecName: "kube-api-access-nx6kw") pod "7c08210e-ef21-4935-be0d-ad7084208641" (UID: "7c08210e-ef21-4935-be0d-ad7084208641"). InnerVolumeSpecName "kube-api-access-nx6kw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.056472 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c08210e-ef21-4935-be0d-ad7084208641-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c08210e-ef21-4935-be0d-ad7084208641" (UID: "7c08210e-ef21-4935-be0d-ad7084208641"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.064252 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c08210e-ef21-4935-be0d-ad7084208641-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.064403 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx6kw\" (UniqueName: \"kubernetes.io/projected/7c08210e-ef21-4935-be0d-ad7084208641-kube-api-access-nx6kw\") on node \"crc\" DevicePath \"\""
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.064492 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c08210e-ef21-4935-be0d-ad7084208641-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.442733 5049 generic.go:334] "Generic (PLEG): container finished" podID="7c08210e-ef21-4935-be0d-ad7084208641" containerID="534cf5946ea30415d0bce47a266778d114dec0529471e9b555d54aad424231a3" exitCode=0
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.442837 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p26fb" event={"ID":"7c08210e-ef21-4935-be0d-ad7084208641","Type":"ContainerDied","Data":"534cf5946ea30415d0bce47a266778d114dec0529471e9b555d54aad424231a3"}
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.443263 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p26fb" event={"ID":"7c08210e-ef21-4935-be0d-ad7084208641","Type":"ContainerDied","Data":"d99a837e755b17855fa2b341a8c99ac3ea3ff35be6683481e6e5c6463aaacff6"}
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.442875 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p26fb"
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.443311 5049 scope.go:117] "RemoveContainer" containerID="534cf5946ea30415d0bce47a266778d114dec0529471e9b555d54aad424231a3"
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.480508 5049 scope.go:117] "RemoveContainer" containerID="b5a52eac6bcf09e2d1d9f56db013d7dc49a707ab6af6df2a894b8cbf625a956f"
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.508849 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p26fb"]
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.520869 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p26fb"]
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.534793 5049 scope.go:117] "RemoveContainer" containerID="c9501466aa195a013bf50f313ba4622cbde42c2a794d18eaba7388212fb83842"
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.557615 5049 scope.go:117] "RemoveContainer" containerID="534cf5946ea30415d0bce47a266778d114dec0529471e9b555d54aad424231a3"
Jan 27 17:34:45 crc kubenswrapper[5049]: E0127 17:34:45.558362 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"534cf5946ea30415d0bce47a266778d114dec0529471e9b555d54aad424231a3\": container with ID starting with 534cf5946ea30415d0bce47a266778d114dec0529471e9b555d54aad424231a3 not found: ID does not exist" containerID="534cf5946ea30415d0bce47a266778d114dec0529471e9b555d54aad424231a3"
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.558532 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"534cf5946ea30415d0bce47a266778d114dec0529471e9b555d54aad424231a3"} err="failed to get container status \"534cf5946ea30415d0bce47a266778d114dec0529471e9b555d54aad424231a3\": rpc error: code = NotFound desc = could not find container \"534cf5946ea30415d0bce47a266778d114dec0529471e9b555d54aad424231a3\": container with ID starting with 534cf5946ea30415d0bce47a266778d114dec0529471e9b555d54aad424231a3 not found: ID does not exist"
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.558916 5049 scope.go:117] "RemoveContainer" containerID="b5a52eac6bcf09e2d1d9f56db013d7dc49a707ab6af6df2a894b8cbf625a956f"
Jan 27 17:34:45 crc kubenswrapper[5049]: E0127 17:34:45.559618 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5a52eac6bcf09e2d1d9f56db013d7dc49a707ab6af6df2a894b8cbf625a956f\": container with ID starting with b5a52eac6bcf09e2d1d9f56db013d7dc49a707ab6af6df2a894b8cbf625a956f not found: ID does not exist" containerID="b5a52eac6bcf09e2d1d9f56db013d7dc49a707ab6af6df2a894b8cbf625a956f"
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.559811 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a52eac6bcf09e2d1d9f56db013d7dc49a707ab6af6df2a894b8cbf625a956f"} err="failed to get container status \"b5a52eac6bcf09e2d1d9f56db013d7dc49a707ab6af6df2a894b8cbf625a956f\": rpc error: code = NotFound desc = could not find container \"b5a52eac6bcf09e2d1d9f56db013d7dc49a707ab6af6df2a894b8cbf625a956f\": container with ID starting with b5a52eac6bcf09e2d1d9f56db013d7dc49a707ab6af6df2a894b8cbf625a956f not found: ID does not exist"
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.559931 5049 scope.go:117] "RemoveContainer" containerID="c9501466aa195a013bf50f313ba4622cbde42c2a794d18eaba7388212fb83842"
Jan 27 17:34:45 crc kubenswrapper[5049]: E0127 17:34:45.560416 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9501466aa195a013bf50f313ba4622cbde42c2a794d18eaba7388212fb83842\": container with ID starting with c9501466aa195a013bf50f313ba4622cbde42c2a794d18eaba7388212fb83842 not found: ID does not exist" containerID="c9501466aa195a013bf50f313ba4622cbde42c2a794d18eaba7388212fb83842"
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.560452 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9501466aa195a013bf50f313ba4622cbde42c2a794d18eaba7388212fb83842"} err="failed to get container status \"c9501466aa195a013bf50f313ba4622cbde42c2a794d18eaba7388212fb83842\": rpc error: code = NotFound desc = could not find container \"c9501466aa195a013bf50f313ba4622cbde42c2a794d18eaba7388212fb83842\": container with ID starting with c9501466aa195a013bf50f313ba4622cbde42c2a794d18eaba7388212fb83842 not found: ID does not exist"
Jan 27 17:34:45 crc kubenswrapper[5049]: I0127 17:34:45.668522 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c08210e-ef21-4935-be0d-ad7084208641" path="/var/lib/kubelet/pods/7c08210e-ef21-4935-be0d-ad7084208641/volumes"
Jan 27 17:34:48 crc kubenswrapper[5049]: I0127 17:34:48.469502 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bnlq2"
Jan 27 17:34:48 crc kubenswrapper[5049]: I0127 17:34:48.469882 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bnlq2"
Jan 27 17:34:48 crc kubenswrapper[5049]: I0127 17:34:48.546547 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bnlq2"
Jan 27 17:34:49 crc kubenswrapper[5049]: I0127 17:34:49.555570 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bnlq2"
Jan 27 17:34:49 crc kubenswrapper[5049]: I0127 17:34:49.633146 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnlq2"]
Jan 27 17:34:51 crc kubenswrapper[5049]: I0127 17:34:51.501893 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bnlq2" podUID="f5974e85-8688-41c1-98ef-583eb9a22b80" containerName="registry-server" containerID="cri-o://ba847e6740274d1d664f5bc14056c43144ce4bfec48aa79a9d40c4a95d7cf891" gracePeriod=2
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.009804 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bnlq2"
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.077310 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5974e85-8688-41c1-98ef-583eb9a22b80-utilities\") pod \"f5974e85-8688-41c1-98ef-583eb9a22b80\" (UID: \"f5974e85-8688-41c1-98ef-583eb9a22b80\") "
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.077448 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5974e85-8688-41c1-98ef-583eb9a22b80-catalog-content\") pod \"f5974e85-8688-41c1-98ef-583eb9a22b80\" (UID: \"f5974e85-8688-41c1-98ef-583eb9a22b80\") "
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.077520 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wglzr\" (UniqueName: \"kubernetes.io/projected/f5974e85-8688-41c1-98ef-583eb9a22b80-kube-api-access-wglzr\") pod \"f5974e85-8688-41c1-98ef-583eb9a22b80\" (UID: \"f5974e85-8688-41c1-98ef-583eb9a22b80\") "
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.078777 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5974e85-8688-41c1-98ef-583eb9a22b80-utilities" (OuterVolumeSpecName: "utilities") pod "f5974e85-8688-41c1-98ef-583eb9a22b80" (UID: "f5974e85-8688-41c1-98ef-583eb9a22b80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.085735 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5974e85-8688-41c1-98ef-583eb9a22b80-kube-api-access-wglzr" (OuterVolumeSpecName: "kube-api-access-wglzr") pod "f5974e85-8688-41c1-98ef-583eb9a22b80" (UID: "f5974e85-8688-41c1-98ef-583eb9a22b80"). InnerVolumeSpecName "kube-api-access-wglzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.106078 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5974e85-8688-41c1-98ef-583eb9a22b80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5974e85-8688-41c1-98ef-583eb9a22b80" (UID: "f5974e85-8688-41c1-98ef-583eb9a22b80"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.179649 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wglzr\" (UniqueName: \"kubernetes.io/projected/f5974e85-8688-41c1-98ef-583eb9a22b80-kube-api-access-wglzr\") on node \"crc\" DevicePath \"\""
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.179714 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5974e85-8688-41c1-98ef-583eb9a22b80-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.179730 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5974e85-8688-41c1-98ef-583eb9a22b80-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.514833 5049 generic.go:334] "Generic (PLEG): container finished" podID="f5974e85-8688-41c1-98ef-583eb9a22b80" containerID="ba847e6740274d1d664f5bc14056c43144ce4bfec48aa79a9d40c4a95d7cf891" exitCode=0
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.514939 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bnlq2"
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.514964 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnlq2" event={"ID":"f5974e85-8688-41c1-98ef-583eb9a22b80","Type":"ContainerDied","Data":"ba847e6740274d1d664f5bc14056c43144ce4bfec48aa79a9d40c4a95d7cf891"}
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.515630 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bnlq2" event={"ID":"f5974e85-8688-41c1-98ef-583eb9a22b80","Type":"ContainerDied","Data":"f1ca71227e1aeb67b14fa37aaeaf4be17530df9150289f04b49a7cacbc74584f"}
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.515664 5049 scope.go:117] "RemoveContainer" containerID="ba847e6740274d1d664f5bc14056c43144ce4bfec48aa79a9d40c4a95d7cf891"
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.565237 5049 scope.go:117] "RemoveContainer" containerID="ed48e044abe761e4281ceb7a43037a73e5bd8b1a021dd91c546e16485f34af10"
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.575870 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnlq2"]
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.585826 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bnlq2"]
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.611950 5049 scope.go:117] "RemoveContainer" containerID="a271df7e41ed775d7e12f0b3892c83a96e16214d06104ec6eb2652c5b960eb2c"
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.710545 5049 scope.go:117] "RemoveContainer" containerID="ba847e6740274d1d664f5bc14056c43144ce4bfec48aa79a9d40c4a95d7cf891"
Jan 27 17:34:52 crc kubenswrapper[5049]: E0127 17:34:52.711231 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba847e6740274d1d664f5bc14056c43144ce4bfec48aa79a9d40c4a95d7cf891\": container with ID starting with ba847e6740274d1d664f5bc14056c43144ce4bfec48aa79a9d40c4a95d7cf891 not found: ID does not exist" containerID="ba847e6740274d1d664f5bc14056c43144ce4bfec48aa79a9d40c4a95d7cf891"
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.711281 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba847e6740274d1d664f5bc14056c43144ce4bfec48aa79a9d40c4a95d7cf891"} err="failed to get container status \"ba847e6740274d1d664f5bc14056c43144ce4bfec48aa79a9d40c4a95d7cf891\": rpc error: code = NotFound desc = could not find container \"ba847e6740274d1d664f5bc14056c43144ce4bfec48aa79a9d40c4a95d7cf891\": container with ID starting with ba847e6740274d1d664f5bc14056c43144ce4bfec48aa79a9d40c4a95d7cf891 not found: ID does not exist"
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.711313 5049 scope.go:117] "RemoveContainer" containerID="ed48e044abe761e4281ceb7a43037a73e5bd8b1a021dd91c546e16485f34af10"
Jan 27 17:34:52 crc kubenswrapper[5049]: E0127 17:34:52.711814 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed48e044abe761e4281ceb7a43037a73e5bd8b1a021dd91c546e16485f34af10\": container with ID starting with ed48e044abe761e4281ceb7a43037a73e5bd8b1a021dd91c546e16485f34af10 not found: ID does not exist" containerID="ed48e044abe761e4281ceb7a43037a73e5bd8b1a021dd91c546e16485f34af10"
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.711858 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed48e044abe761e4281ceb7a43037a73e5bd8b1a021dd91c546e16485f34af10"} err="failed to get container status \"ed48e044abe761e4281ceb7a43037a73e5bd8b1a021dd91c546e16485f34af10\": rpc error: code = NotFound desc = could not find container \"ed48e044abe761e4281ceb7a43037a73e5bd8b1a021dd91c546e16485f34af10\": container with ID starting with ed48e044abe761e4281ceb7a43037a73e5bd8b1a021dd91c546e16485f34af10 not found: ID does not exist"
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.711889 5049 scope.go:117] "RemoveContainer" containerID="a271df7e41ed775d7e12f0b3892c83a96e16214d06104ec6eb2652c5b960eb2c"
Jan 27 17:34:52 crc kubenswrapper[5049]: E0127 17:34:52.712343 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a271df7e41ed775d7e12f0b3892c83a96e16214d06104ec6eb2652c5b960eb2c\": container with ID starting with a271df7e41ed775d7e12f0b3892c83a96e16214d06104ec6eb2652c5b960eb2c not found: ID does not exist" containerID="a271df7e41ed775d7e12f0b3892c83a96e16214d06104ec6eb2652c5b960eb2c"
Jan 27 17:34:52 crc kubenswrapper[5049]: I0127 17:34:52.712381 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a271df7e41ed775d7e12f0b3892c83a96e16214d06104ec6eb2652c5b960eb2c"} err="failed to get container status \"a271df7e41ed775d7e12f0b3892c83a96e16214d06104ec6eb2652c5b960eb2c\": rpc error: code = NotFound desc = could not find container \"a271df7e41ed775d7e12f0b3892c83a96e16214d06104ec6eb2652c5b960eb2c\": container with ID starting with a271df7e41ed775d7e12f0b3892c83a96e16214d06104ec6eb2652c5b960eb2c not found: ID does not exist"
Jan 27 17:34:53 crc kubenswrapper[5049]: I0127 17:34:53.661843 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5974e85-8688-41c1-98ef-583eb9a22b80" path="/var/lib/kubelet/pods/f5974e85-8688-41c1-98ef-583eb9a22b80/volumes"
Jan 27 17:36:17 crc kubenswrapper[5049]: I0127 17:36:17.781852 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:36:17 crc kubenswrapper[5049]: I0127 17:36:17.782408 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:36:47 crc kubenswrapper[5049]: I0127 17:36:47.782297 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:36:47 crc kubenswrapper[5049]: I0127 17:36:47.782787 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:37:17 crc kubenswrapper[5049]: I0127 17:37:17.781269 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:37:17 crc kubenswrapper[5049]: I0127 17:37:17.782037 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:37:17 crc kubenswrapper[5049]: I0127 17:37:17.782119 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9"
Jan 27 17:37:17 crc kubenswrapper[5049]: I0127 17:37:17.782958 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 17:37:17 crc kubenswrapper[5049]: I0127 17:37:17.783048 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5" gracePeriod=600
Jan 27 17:37:17 crc kubenswrapper[5049]: E0127 17:37:17.920040 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:37:18 crc kubenswrapper[5049]: I0127 17:37:18.836350 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5" exitCode=0
Jan 27 17:37:18 crc kubenswrapper[5049]: I0127 17:37:18.836485 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"}
Jan 27 17:37:18 crc kubenswrapper[5049]: I0127 17:37:18.836832 5049 scope.go:117] "RemoveContainer" containerID="151d4bb24a0fd43034dab23af7c046d8430c4f7b984bb791c865ade78b08e85a"
Jan 27 17:37:18 crc kubenswrapper[5049]: I0127 17:37:18.837791 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:37:18 crc kubenswrapper[5049]: E0127 17:37:18.838197 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:37:33 crc kubenswrapper[5049]: I0127 17:37:33.646590 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:37:33 crc kubenswrapper[5049]: E0127 17:37:33.647762 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:37:46 crc kubenswrapper[5049]: I0127 17:37:46.646262 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:37:46 crc kubenswrapper[5049]: E0127 17:37:46.647429 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:37:59 crc kubenswrapper[5049]: I0127 17:37:59.646483 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:37:59 crc kubenswrapper[5049]: E0127 17:37:59.647453 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:38:10 crc kubenswrapper[5049]: I0127 17:38:10.646446 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:38:10 crc kubenswrapper[5049]: E0127 17:38:10.647400 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:38:24 crc kubenswrapper[5049]: I0127 17:38:24.646566 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:38:24 crc kubenswrapper[5049]: E0127 17:38:24.647828 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:38:37 crc kubenswrapper[5049]: I0127 17:38:37.647719 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:38:37 crc kubenswrapper[5049]: E0127 17:38:37.648962 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:38:51 crc kubenswrapper[5049]: I0127 17:38:51.648297 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:38:51 crc kubenswrapper[5049]: E0127 17:38:51.649323 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:39:06 crc kubenswrapper[5049]: I0127 17:39:06.647349 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:39:06 crc kubenswrapper[5049]: E0127 17:39:06.648478 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:39:19 crc kubenswrapper[5049]: I0127 17:39:19.645893 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:39:19 crc kubenswrapper[5049]: E0127 17:39:19.647014 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:39:32 crc kubenswrapper[5049]: I0127 17:39:32.645918 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:39:32 crc kubenswrapper[5049]: E0127 17:39:32.647094 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:39:45 crc kubenswrapper[5049]: I0127 17:39:45.654744 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:39:45 crc kubenswrapper[5049]: E0127 17:39:45.655850 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.572361 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-skgpm"]
Jan 27 17:39:55 crc kubenswrapper[5049]: E0127 17:39:55.573273 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5974e85-8688-41c1-98ef-583eb9a22b80" containerName="extract-content"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.573287 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5974e85-8688-41c1-98ef-583eb9a22b80" containerName="extract-content"
Jan 27 17:39:55 crc kubenswrapper[5049]: E0127 17:39:55.573301 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5974e85-8688-41c1-98ef-583eb9a22b80" containerName="extract-utilities"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.573308 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5974e85-8688-41c1-98ef-583eb9a22b80" containerName="extract-utilities"
Jan 27 17:39:55 crc kubenswrapper[5049]: E0127 17:39:55.573327 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5974e85-8688-41c1-98ef-583eb9a22b80" containerName="registry-server"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.573334 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5974e85-8688-41c1-98ef-583eb9a22b80" containerName="registry-server"
Jan 27 17:39:55 crc kubenswrapper[5049]: E0127 17:39:55.573346 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c08210e-ef21-4935-be0d-ad7084208641" containerName="extract-content"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.573352 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c08210e-ef21-4935-be0d-ad7084208641" containerName="extract-content"
Jan 27 17:39:55 crc kubenswrapper[5049]: E0127 17:39:55.573361 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c08210e-ef21-4935-be0d-ad7084208641" containerName="registry-server"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.573366 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c08210e-ef21-4935-be0d-ad7084208641" containerName="registry-server"
Jan 27 17:39:55 crc kubenswrapper[5049]: E0127 17:39:55.573378 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c08210e-ef21-4935-be0d-ad7084208641" containerName="extract-utilities"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.573383 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c08210e-ef21-4935-be0d-ad7084208641" containerName="extract-utilities"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.573505 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c08210e-ef21-4935-be0d-ad7084208641" containerName="registry-server"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.573521 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5974e85-8688-41c1-98ef-583eb9a22b80" containerName="registry-server"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.574505 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-skgpm"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.588130 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skgpm"]
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.714443 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vz4x\" (UniqueName: \"kubernetes.io/projected/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-kube-api-access-2vz4x\") pod \"redhat-operators-skgpm\" (UID: \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\") " pod="openshift-marketplace/redhat-operators-skgpm"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.714820 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-catalog-content\") pod \"redhat-operators-skgpm\" (UID: \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\") " pod="openshift-marketplace/redhat-operators-skgpm"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.714889 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-utilities\") pod \"redhat-operators-skgpm\" (UID: \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\") " pod="openshift-marketplace/redhat-operators-skgpm"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.816069 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vz4x\" (UniqueName: \"kubernetes.io/projected/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-kube-api-access-2vz4x\") pod \"redhat-operators-skgpm\" (UID: \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\") " pod="openshift-marketplace/redhat-operators-skgpm"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.816167 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-catalog-content\") pod \"redhat-operators-skgpm\" (UID: \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\") " pod="openshift-marketplace/redhat-operators-skgpm"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.816220 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-utilities\") pod \"redhat-operators-skgpm\" (UID: \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\") " pod="openshift-marketplace/redhat-operators-skgpm"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.817024 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-utilities\") pod \"redhat-operators-skgpm\" (UID: \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\") " pod="openshift-marketplace/redhat-operators-skgpm"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.817129 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-catalog-content\") pod \"redhat-operators-skgpm\" (UID: \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\") " pod="openshift-marketplace/redhat-operators-skgpm"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.839330 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vz4x\" (UniqueName: \"kubernetes.io/projected/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-kube-api-access-2vz4x\") pod \"redhat-operators-skgpm\" (UID: \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\") " pod="openshift-marketplace/redhat-operators-skgpm"
Jan 27 17:39:55 crc kubenswrapper[5049]: I0127 17:39:55.939279 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-skgpm"
Jan 27 17:39:56 crc kubenswrapper[5049]: I0127 17:39:56.389822 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skgpm"]
Jan 27 17:39:56 crc kubenswrapper[5049]: I0127 17:39:56.646372 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:39:56 crc kubenswrapper[5049]: E0127 17:39:56.646786 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:39:57 crc kubenswrapper[5049]: I0127 17:39:57.373424 5049 generic.go:334] "Generic (PLEG): container finished" podID="a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" containerID="99dbe7eef4652717d12b58bf0c070617d3db135e257fa6923799fd001f4e14a4" exitCode=0
Jan 27 17:39:57 crc kubenswrapper[5049]: I0127 17:39:57.373521 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skgpm" event={"ID":"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb","Type":"ContainerDied","Data":"99dbe7eef4652717d12b58bf0c070617d3db135e257fa6923799fd001f4e14a4"}
Jan 27 17:39:57 crc kubenswrapper[5049]: I0127 17:39:57.373773 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skgpm" event={"ID":"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb","Type":"ContainerStarted","Data":"acadc650297d56bb05968bac29c00e9987ff7c14a1e14e99c2e951d3e928b99d"}
Jan 27 17:39:57 crc kubenswrapper[5049]: I0127 17:39:57.376555 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 17:39:58 crc kubenswrapper[5049]: I0127 17:39:58.386310 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skgpm" event={"ID":"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb","Type":"ContainerStarted","Data":"fd64a20ac9306ecf489dc160b6dfaef096decab181b352d730a972866fcd12e6"}
Jan 27 17:39:59 crc kubenswrapper[5049]: I0127 17:39:59.395949 5049 generic.go:334] "Generic (PLEG): container finished" podID="a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" containerID="fd64a20ac9306ecf489dc160b6dfaef096decab181b352d730a972866fcd12e6" exitCode=0
Jan 27 17:39:59 crc kubenswrapper[5049]: I0127 17:39:59.395995 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skgpm" event={"ID":"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb","Type":"ContainerDied","Data":"fd64a20ac9306ecf489dc160b6dfaef096decab181b352d730a972866fcd12e6"}
Jan 27 17:40:00 crc kubenswrapper[5049]: I0127 17:40:00.409275 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skgpm" event={"ID":"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb","Type":"ContainerStarted","Data":"375725583ecd80b59ca1db5f00a086423304236da552e80b1cc391db17ad9970"}
Jan 27 17:40:05 crc kubenswrapper[5049]: I0127 17:40:05.939714 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-skgpm"
Jan 27 17:40:05 crc kubenswrapper[5049]: I0127 17:40:05.940280 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-skgpm"
Jan 27 17:40:06 crc kubenswrapper[5049]: I0127 17:40:06.995144 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-skgpm" podUID="a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" containerName="registry-server" probeResult="failure" output=<
Jan 27 17:40:06 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s
Jan 27 17:40:06 crc kubenswrapper[5049]: >
Jan 27 17:40:07 crc kubenswrapper[5049]: I0127 17:40:07.646642 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:40:07 crc kubenswrapper[5049]: E0127 17:40:07.647541 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:40:16 crc kubenswrapper[5049]: I0127 17:40:16.017031 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-skgpm"
Jan 27 17:40:16 crc kubenswrapper[5049]: I0127 17:40:16.051317 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-skgpm" podStartSLOduration=18.627212801 podStartE2EDuration="21.051291568s" podCreationTimestamp="2026-01-27 17:39:55 +0000 UTC" firstStartedPulling="2026-01-27 17:39:57.376274605 +0000 UTC m=+2572.475248164" lastFinishedPulling="2026-01-27 17:39:59.800353382 +0000 UTC m=+2574.899326931" observedRunningTime="2026-01-27 17:40:00.437022968 +0000 UTC m=+2575.535996547" watchObservedRunningTime="2026-01-27 17:40:16.051291568 +0000 UTC m=+2591.150265157"
Jan 27
17:40:16 crc kubenswrapper[5049]: I0127 17:40:16.092174 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-skgpm" Jan 27 17:40:17 crc kubenswrapper[5049]: I0127 17:40:17.384848 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skgpm"] Jan 27 17:40:17 crc kubenswrapper[5049]: I0127 17:40:17.558158 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-skgpm" podUID="a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" containerName="registry-server" containerID="cri-o://375725583ecd80b59ca1db5f00a086423304236da552e80b1cc391db17ad9970" gracePeriod=2 Jan 27 17:40:17 crc kubenswrapper[5049]: I0127 17:40:17.981422 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-skgpm" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.075148 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vz4x\" (UniqueName: \"kubernetes.io/projected/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-kube-api-access-2vz4x\") pod \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\" (UID: \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\") " Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.075200 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-catalog-content\") pod \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\" (UID: \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\") " Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.075264 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-utilities\") pod \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\" (UID: \"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb\") " Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.076279 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-utilities" (OuterVolumeSpecName: "utilities") pod "a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" (UID: "a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.082993 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-kube-api-access-2vz4x" (OuterVolumeSpecName: "kube-api-access-2vz4x") pod "a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" (UID: "a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb"). InnerVolumeSpecName "kube-api-access-2vz4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.176963 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vz4x\" (UniqueName: \"kubernetes.io/projected/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-kube-api-access-2vz4x\") on node \"crc\" DevicePath \"\"" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.177020 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.259101 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" (UID: "a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.278530 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.574649 5049 generic.go:334] "Generic (PLEG): container finished" podID="a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" containerID="375725583ecd80b59ca1db5f00a086423304236da552e80b1cc391db17ad9970" exitCode=0 Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.574856 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skgpm" event={"ID":"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb","Type":"ContainerDied","Data":"375725583ecd80b59ca1db5f00a086423304236da552e80b1cc391db17ad9970"} Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.574916 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skgpm" event={"ID":"a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb","Type":"ContainerDied","Data":"acadc650297d56bb05968bac29c00e9987ff7c14a1e14e99c2e951d3e928b99d"} Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.574958 5049 scope.go:117] "RemoveContainer" containerID="375725583ecd80b59ca1db5f00a086423304236da552e80b1cc391db17ad9970" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.575062 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skgpm" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.620895 5049 scope.go:117] "RemoveContainer" containerID="fd64a20ac9306ecf489dc160b6dfaef096decab181b352d730a972866fcd12e6" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.622053 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skgpm"] Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.634263 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-skgpm"] Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.655111 5049 scope.go:117] "RemoveContainer" containerID="99dbe7eef4652717d12b58bf0c070617d3db135e257fa6923799fd001f4e14a4" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.690539 5049 scope.go:117] "RemoveContainer" containerID="375725583ecd80b59ca1db5f00a086423304236da552e80b1cc391db17ad9970" Jan 27 17:40:18 crc kubenswrapper[5049]: E0127 17:40:18.691088 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"375725583ecd80b59ca1db5f00a086423304236da552e80b1cc391db17ad9970\": container with ID starting with 375725583ecd80b59ca1db5f00a086423304236da552e80b1cc391db17ad9970 not found: ID does not exist" containerID="375725583ecd80b59ca1db5f00a086423304236da552e80b1cc391db17ad9970" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.691119 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375725583ecd80b59ca1db5f00a086423304236da552e80b1cc391db17ad9970"} err="failed to get container status \"375725583ecd80b59ca1db5f00a086423304236da552e80b1cc391db17ad9970\": rpc error: code = NotFound desc = could not find container \"375725583ecd80b59ca1db5f00a086423304236da552e80b1cc391db17ad9970\": container with ID starting with 375725583ecd80b59ca1db5f00a086423304236da552e80b1cc391db17ad9970 not found: ID does not exist" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.691138 5049 scope.go:117] "RemoveContainer" containerID="fd64a20ac9306ecf489dc160b6dfaef096decab181b352d730a972866fcd12e6" Jan 27 17:40:18 crc kubenswrapper[5049]: E0127 17:40:18.691560 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd64a20ac9306ecf489dc160b6dfaef096decab181b352d730a972866fcd12e6\": container with ID starting with fd64a20ac9306ecf489dc160b6dfaef096decab181b352d730a972866fcd12e6 not found: ID does not exist" containerID="fd64a20ac9306ecf489dc160b6dfaef096decab181b352d730a972866fcd12e6" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.691605 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd64a20ac9306ecf489dc160b6dfaef096decab181b352d730a972866fcd12e6"} err="failed to get container status \"fd64a20ac9306ecf489dc160b6dfaef096decab181b352d730a972866fcd12e6\": rpc error: code = NotFound desc = could not find container \"fd64a20ac9306ecf489dc160b6dfaef096decab181b352d730a972866fcd12e6\": container with ID starting with fd64a20ac9306ecf489dc160b6dfaef096decab181b352d730a972866fcd12e6 not found: ID does not exist" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.693011 5049 scope.go:117] "RemoveContainer" containerID="99dbe7eef4652717d12b58bf0c070617d3db135e257fa6923799fd001f4e14a4" Jan 27 17:40:18 crc kubenswrapper[5049]: E0127 17:40:18.693510 5049 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"99dbe7eef4652717d12b58bf0c070617d3db135e257fa6923799fd001f4e14a4\": container with ID starting with 99dbe7eef4652717d12b58bf0c070617d3db135e257fa6923799fd001f4e14a4 not found: ID does not exist" containerID="99dbe7eef4652717d12b58bf0c070617d3db135e257fa6923799fd001f4e14a4" Jan 27 17:40:18 crc kubenswrapper[5049]: I0127 17:40:18.693535 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99dbe7eef4652717d12b58bf0c070617d3db135e257fa6923799fd001f4e14a4"} err="failed to get container status \"99dbe7eef4652717d12b58bf0c070617d3db135e257fa6923799fd001f4e14a4\": rpc error: code = NotFound desc = could not find container \"99dbe7eef4652717d12b58bf0c070617d3db135e257fa6923799fd001f4e14a4\": container with ID starting with 99dbe7eef4652717d12b58bf0c070617d3db135e257fa6923799fd001f4e14a4 not found: ID does not exist" Jan 27 17:40:19 crc kubenswrapper[5049]: I0127 17:40:19.663424 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" path="/var/lib/kubelet/pods/a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb/volumes" Jan 27 17:40:21 crc kubenswrapper[5049]: I0127 17:40:21.646184 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5" Jan 27 17:40:21 crc kubenswrapper[5049]: E0127 17:40:21.646727 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:40:33 crc kubenswrapper[5049]: I0127 17:40:33.647039 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5" Jan 27 17:40:33 crc kubenswrapper[5049]: E0127 17:40:33.648010 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:40:48 crc kubenswrapper[5049]: I0127 17:40:48.645550 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5" Jan 27 17:40:48 crc kubenswrapper[5049]: E0127 17:40:48.646338 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:40:59 crc kubenswrapper[5049]: I0127 17:40:59.649634 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5" Jan 27 17:40:59 crc kubenswrapper[5049]: E0127 17:40:59.657592 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:41:12 crc kubenswrapper[5049]: I0127 17:41:12.646741 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5" Jan 27 17:41:12 crc kubenswrapper[5049]: E0127 17:41:12.647714 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:41:24 crc kubenswrapper[5049]: I0127 17:41:24.646248 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5" Jan 27 17:41:24 crc kubenswrapper[5049]: E0127 17:41:24.647001 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:41:36 crc kubenswrapper[5049]: I0127 17:41:36.646440 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5" Jan 27 17:41:36 crc kubenswrapper[5049]: E0127 17:41:36.647538 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:41:51 crc kubenswrapper[5049]: I0127 17:41:51.646437 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5" Jan 27 17:41:51 crc kubenswrapper[5049]: E0127 17:41:51.647377 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:42:06 crc kubenswrapper[5049]: I0127 17:42:06.646308 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5" Jan 27 17:42:06 crc kubenswrapper[5049]: E0127 17:42:06.648062 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:42:18 crc kubenswrapper[5049]: I0127 17:42:18.646572 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5" Jan 27 17:42:19 crc kubenswrapper[5049]: I0127 17:42:19.614515 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"82a206bc083b77e2da8141673ab7495d9eea3b1a439feb7ffc5dc004dcb1865c"} Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.753599 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ntpts"] Jan 27 17:44:35 crc kubenswrapper[5049]: E0127 17:44:35.754664 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" containerName="extract-utilities" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.754747 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" containerName="extract-utilities" Jan 27 17:44:35 crc kubenswrapper[5049]: E0127 17:44:35.754767 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" containerName="extract-content" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.754777 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" containerName="extract-content" Jan 27 17:44:35 crc kubenswrapper[5049]: E0127 17:44:35.754800 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" containerName="registry-server" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.754811 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" containerName="registry-server" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.755092 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5fc449f-99c8-4fc9-aaec-fa6c385e3dbb" containerName="registry-server" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.757884 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.768068 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ntpts"] Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.771622 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzh67\" (UniqueName: \"kubernetes.io/projected/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-kube-api-access-xzh67\") pod \"certified-operators-ntpts\" (UID: \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\") " pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.771732 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-catalog-content\") pod \"certified-operators-ntpts\" (UID: \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\") " pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.771807 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-utilities\") pod \"certified-operators-ntpts\" (UID: \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\") " pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.872800 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzh67\" (UniqueName: \"kubernetes.io/projected/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-kube-api-access-xzh67\") pod \"certified-operators-ntpts\" (UID: \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\") " pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.872864 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-catalog-content\") pod \"certified-operators-ntpts\" (UID: \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\") " pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.872904 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-utilities\") pod \"certified-operators-ntpts\" (UID: \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\") " pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.873519 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-utilities\") pod \"certified-operators-ntpts\" (UID: \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\") " pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.874308 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-catalog-content\") pod \"certified-operators-ntpts\" (UID: \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\") " pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.916347 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xzh67\" (UniqueName: \"kubernetes.io/projected/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-kube-api-access-xzh67\") pod \"certified-operators-ntpts\" (UID: \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\") " pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.922058 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cxmqf"] Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.924419 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.940725 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cxmqf"] Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.974146 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqsmn\" (UniqueName: \"kubernetes.io/projected/4f2d38ba-b563-4654-89fe-980e52d11fcd-kube-api-access-nqsmn\") pod \"community-operators-cxmqf\" (UID: \"4f2d38ba-b563-4654-89fe-980e52d11fcd\") " pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.974253 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2d38ba-b563-4654-89fe-980e52d11fcd-catalog-content\") pod \"community-operators-cxmqf\" (UID: \"4f2d38ba-b563-4654-89fe-980e52d11fcd\") " pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:35 crc kubenswrapper[5049]: I0127 17:44:35.974391 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2d38ba-b563-4654-89fe-980e52d11fcd-utilities\") pod \"community-operators-cxmqf\" (UID: \"4f2d38ba-b563-4654-89fe-980e52d11fcd\") " pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:36 crc kubenswrapper[5049]: I0127 17:44:36.075577 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqsmn\" (UniqueName: \"kubernetes.io/projected/4f2d38ba-b563-4654-89fe-980e52d11fcd-kube-api-access-nqsmn\") pod \"community-operators-cxmqf\" (UID: \"4f2d38ba-b563-4654-89fe-980e52d11fcd\") " pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:36 crc kubenswrapper[5049]: I0127 17:44:36.075719 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2d38ba-b563-4654-89fe-980e52d11fcd-catalog-content\") pod \"community-operators-cxmqf\" (UID: \"4f2d38ba-b563-4654-89fe-980e52d11fcd\") " pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:36 crc kubenswrapper[5049]: I0127 17:44:36.075837 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2d38ba-b563-4654-89fe-980e52d11fcd-utilities\") pod \"community-operators-cxmqf\" (UID: \"4f2d38ba-b563-4654-89fe-980e52d11fcd\") " pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:36 crc kubenswrapper[5049]: I0127 17:44:36.076394 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2d38ba-b563-4654-89fe-980e52d11fcd-catalog-content\") pod 
\"community-operators-cxmqf\" (UID: \"4f2d38ba-b563-4654-89fe-980e52d11fcd\") " pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:36 crc kubenswrapper[5049]: I0127 17:44:36.076613 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2d38ba-b563-4654-89fe-980e52d11fcd-utilities\") pod \"community-operators-cxmqf\" (UID: \"4f2d38ba-b563-4654-89fe-980e52d11fcd\") " pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:36 crc kubenswrapper[5049]: I0127 17:44:36.094460 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:36 crc kubenswrapper[5049]: I0127 17:44:36.098010 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqsmn\" (UniqueName: \"kubernetes.io/projected/4f2d38ba-b563-4654-89fe-980e52d11fcd-kube-api-access-nqsmn\") pod \"community-operators-cxmqf\" (UID: \"4f2d38ba-b563-4654-89fe-980e52d11fcd\") " pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:36 crc kubenswrapper[5049]: I0127 17:44:36.276719 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:36 crc kubenswrapper[5049]: I0127 17:44:36.378104 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ntpts"] Jan 27 17:44:36 crc kubenswrapper[5049]: I0127 17:44:36.706106 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cxmqf"] Jan 27 17:44:36 crc kubenswrapper[5049]: W0127 17:44:36.709707 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f2d38ba_b563_4654_89fe_980e52d11fcd.slice/crio-fe491ccd569ddf4d9b80ea6eb3ecba66ec0bc962b2af6b9c9dab0b97644305d8 WatchSource:0}: Error finding container fe491ccd569ddf4d9b80ea6eb3ecba66ec0bc962b2af6b9c9dab0b97644305d8: Status 404 returned error can't find the container with id fe491ccd569ddf4d9b80ea6eb3ecba66ec0bc962b2af6b9c9dab0b97644305d8 Jan 27 17:44:36 crc kubenswrapper[5049]: I0127 17:44:36.765273 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxmqf" event={"ID":"4f2d38ba-b563-4654-89fe-980e52d11fcd","Type":"ContainerStarted","Data":"fe491ccd569ddf4d9b80ea6eb3ecba66ec0bc962b2af6b9c9dab0b97644305d8"} Jan 27 17:44:36 crc kubenswrapper[5049]: I0127 17:44:36.772141 5049 generic.go:334] "Generic (PLEG): container finished" podID="eb52bc7d-c48d-4ca1-85c4-2b527799d95c" containerID="6a249f1d851bdf58672ddc4a1a51869c20cdff3407e4085f51c4c49af3617e3f" exitCode=0 Jan 27 17:44:36 crc kubenswrapper[5049]: I0127 17:44:36.772193 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntpts" event={"ID":"eb52bc7d-c48d-4ca1-85c4-2b527799d95c","Type":"ContainerDied","Data":"6a249f1d851bdf58672ddc4a1a51869c20cdff3407e4085f51c4c49af3617e3f"} Jan 27 17:44:36 crc kubenswrapper[5049]: I0127 17:44:36.772228 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntpts" event={"ID":"eb52bc7d-c48d-4ca1-85c4-2b527799d95c","Type":"ContainerStarted","Data":"2656182716e7d09f3ec2318072b9941865b9662ad6a37dbc0efb11b3f19b8783"} Jan 27 17:44:37 crc kubenswrapper[5049]: I0127 17:44:37.781388 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-ntpts" event={"ID":"eb52bc7d-c48d-4ca1-85c4-2b527799d95c","Type":"ContainerStarted","Data":"b34f7cd4c29210617f4bf7ea9571722857345ea2ba615b2dafb63037d13223a2"} Jan 27 17:44:37 crc kubenswrapper[5049]: I0127 17:44:37.785546 5049 generic.go:334] "Generic (PLEG): container finished" podID="4f2d38ba-b563-4654-89fe-980e52d11fcd" containerID="da295c72f0f559e40f53e0a2271af034724bc27e44200550539d441ed0fb4cc8" exitCode=0 Jan 27 17:44:37 crc kubenswrapper[5049]: I0127 17:44:37.785589 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxmqf" event={"ID":"4f2d38ba-b563-4654-89fe-980e52d11fcd","Type":"ContainerDied","Data":"da295c72f0f559e40f53e0a2271af034724bc27e44200550539d441ed0fb4cc8"} Jan 27 17:44:38 crc kubenswrapper[5049]: I0127 17:44:38.796494 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxmqf" event={"ID":"4f2d38ba-b563-4654-89fe-980e52d11fcd","Type":"ContainerStarted","Data":"1c7bb3fdd81b344a595d4d1914cb5134c35790819df4d6f40c8a803145712183"} Jan 27 17:44:38 crc kubenswrapper[5049]: I0127 17:44:38.800541 5049 generic.go:334] "Generic (PLEG): container finished" podID="eb52bc7d-c48d-4ca1-85c4-2b527799d95c" containerID="b34f7cd4c29210617f4bf7ea9571722857345ea2ba615b2dafb63037d13223a2" exitCode=0 Jan 27 17:44:38 crc kubenswrapper[5049]: I0127 17:44:38.800612 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntpts" event={"ID":"eb52bc7d-c48d-4ca1-85c4-2b527799d95c","Type":"ContainerDied","Data":"b34f7cd4c29210617f4bf7ea9571722857345ea2ba615b2dafb63037d13223a2"} Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.314148 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x48m7"] Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.317051 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.324373 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x48m7"] Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.430551 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgfl5\" (UniqueName: \"kubernetes.io/projected/40193211-33ab-4d21-ba3e-cd853303f876-kube-api-access-hgfl5\") pod \"redhat-marketplace-x48m7\" (UID: \"40193211-33ab-4d21-ba3e-cd853303f876\") " pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.431221 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40193211-33ab-4d21-ba3e-cd853303f876-utilities\") pod \"redhat-marketplace-x48m7\" (UID: \"40193211-33ab-4d21-ba3e-cd853303f876\") " pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.431568 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40193211-33ab-4d21-ba3e-cd853303f876-catalog-content\") pod \"redhat-marketplace-x48m7\" (UID: \"40193211-33ab-4d21-ba3e-cd853303f876\") " pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.533071 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgfl5\" (UniqueName: \"kubernetes.io/projected/40193211-33ab-4d21-ba3e-cd853303f876-kube-api-access-hgfl5\") pod \"redhat-marketplace-x48m7\" (UID: \"40193211-33ab-4d21-ba3e-cd853303f876\") " pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.533558 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40193211-33ab-4d21-ba3e-cd853303f876-utilities\") pod \"redhat-marketplace-x48m7\" (UID: \"40193211-33ab-4d21-ba3e-cd853303f876\") " pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.533728 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40193211-33ab-4d21-ba3e-cd853303f876-catalog-content\") pod \"redhat-marketplace-x48m7\" (UID: \"40193211-33ab-4d21-ba3e-cd853303f876\") " pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.534106 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40193211-33ab-4d21-ba3e-cd853303f876-utilities\") pod \"redhat-marketplace-x48m7\" (UID: \"40193211-33ab-4d21-ba3e-cd853303f876\") " pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.534154 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40193211-33ab-4d21-ba3e-cd853303f876-catalog-content\") pod \"redhat-marketplace-x48m7\" (UID: \"40193211-33ab-4d21-ba3e-cd853303f876\") " pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.562849 5049 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hgfl5\" (UniqueName: \"kubernetes.io/projected/40193211-33ab-4d21-ba3e-cd853303f876-kube-api-access-hgfl5\") pod \"redhat-marketplace-x48m7\" (UID: \"40193211-33ab-4d21-ba3e-cd853303f876\") " pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.637759 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.835952 5049 generic.go:334] "Generic (PLEG): container finished" podID="4f2d38ba-b563-4654-89fe-980e52d11fcd" containerID="1c7bb3fdd81b344a595d4d1914cb5134c35790819df4d6f40c8a803145712183" exitCode=0 Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.836321 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxmqf" event={"ID":"4f2d38ba-b563-4654-89fe-980e52d11fcd","Type":"ContainerDied","Data":"1c7bb3fdd81b344a595d4d1914cb5134c35790819df4d6f40c8a803145712183"} Jan 27 17:44:39 crc kubenswrapper[5049]: I0127 17:44:39.884200 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x48m7"] Jan 27 17:44:39 crc kubenswrapper[5049]: W0127 17:44:39.896927 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40193211_33ab_4d21_ba3e_cd853303f876.slice/crio-bbe002545b156c8166382d229d8cfadbfb1892fd3b530060abfdfe8574ba548a WatchSource:0}: Error finding container bbe002545b156c8166382d229d8cfadbfb1892fd3b530060abfdfe8574ba548a: Status 404 returned error can't find the container with id bbe002545b156c8166382d229d8cfadbfb1892fd3b530060abfdfe8574ba548a Jan 27 17:44:40 crc kubenswrapper[5049]: I0127 17:44:40.847190 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntpts" event={"ID":"eb52bc7d-c48d-4ca1-85c4-2b527799d95c","Type":"ContainerStarted","Data":"6a795e75956bbca6694a7b198f0198686350b9eaf6ca25adfafbf0e5801bee3e"} Jan 27 17:44:40 crc kubenswrapper[5049]: I0127 17:44:40.850848 5049 generic.go:334] "Generic (PLEG): container finished" podID="40193211-33ab-4d21-ba3e-cd853303f876" containerID="d0fba2ed2a182a397d07c01839d5aaaea7d4163202bf33b86e7848dbab8169e6" exitCode=0 Jan 27 17:44:40 crc kubenswrapper[5049]: I0127 17:44:40.850939 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x48m7" event={"ID":"40193211-33ab-4d21-ba3e-cd853303f876","Type":"ContainerDied","Data":"d0fba2ed2a182a397d07c01839d5aaaea7d4163202bf33b86e7848dbab8169e6"} Jan 27 17:44:40 crc kubenswrapper[5049]: I0127 17:44:40.850971 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x48m7" event={"ID":"40193211-33ab-4d21-ba3e-cd853303f876","Type":"ContainerStarted","Data":"bbe002545b156c8166382d229d8cfadbfb1892fd3b530060abfdfe8574ba548a"} Jan 27 17:44:40 crc kubenswrapper[5049]: I0127 17:44:40.855019 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxmqf" event={"ID":"4f2d38ba-b563-4654-89fe-980e52d11fcd","Type":"ContainerStarted","Data":"137d560d6264118670f95d0c6492b8d027ea4947e4443033781b358868f0b70c"} Jan 27 17:44:40 crc kubenswrapper[5049]: I0127 17:44:40.875432 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ntpts" podStartSLOduration=3.046258832 
podStartE2EDuration="5.875412232s" podCreationTimestamp="2026-01-27 17:44:35 +0000 UTC" firstStartedPulling="2026-01-27 17:44:36.774307111 +0000 UTC m=+2851.873280660" lastFinishedPulling="2026-01-27 17:44:39.603460521 +0000 UTC m=+2854.702434060" observedRunningTime="2026-01-27 17:44:40.866339848 +0000 UTC m=+2855.965313417" watchObservedRunningTime="2026-01-27 17:44:40.875412232 +0000 UTC m=+2855.974385801" Jan 27 17:44:40 crc kubenswrapper[5049]: I0127 17:44:40.893418 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cxmqf" podStartSLOduration=3.238616049 podStartE2EDuration="5.893400936s" podCreationTimestamp="2026-01-27 17:44:35 +0000 UTC" firstStartedPulling="2026-01-27 17:44:37.788665581 +0000 UTC m=+2852.887639170" lastFinishedPulling="2026-01-27 17:44:40.443450468 +0000 UTC m=+2855.542424057" observedRunningTime="2026-01-27 17:44:40.888526059 +0000 UTC m=+2855.987499608" watchObservedRunningTime="2026-01-27 17:44:40.893400936 +0000 UTC m=+2855.992374495" Jan 27 17:44:41 crc kubenswrapper[5049]: I0127 17:44:41.863024 5049 generic.go:334] "Generic (PLEG): container finished" podID="40193211-33ab-4d21-ba3e-cd853303f876" containerID="9f5235831553247ba285369a3a4e3e27e80a9175ac2d73114ad7113109c466c9" exitCode=0 Jan 27 17:44:41 crc kubenswrapper[5049]: I0127 17:44:41.863138 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x48m7" event={"ID":"40193211-33ab-4d21-ba3e-cd853303f876","Type":"ContainerDied","Data":"9f5235831553247ba285369a3a4e3e27e80a9175ac2d73114ad7113109c466c9"} Jan 27 17:44:42 crc kubenswrapper[5049]: I0127 17:44:42.876767 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x48m7" event={"ID":"40193211-33ab-4d21-ba3e-cd853303f876","Type":"ContainerStarted","Data":"96fb373bdd3b33c496c9978e633c390b0795082d5e92b63d5ff277d0ab97e3e8"} Jan 27 17:44:42 crc kubenswrapper[5049]: I0127 17:44:42.904775 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x48m7" podStartSLOduration=2.19428302 podStartE2EDuration="3.904749079s" podCreationTimestamp="2026-01-27 17:44:39 +0000 UTC" firstStartedPulling="2026-01-27 17:44:40.852897412 +0000 UTC m=+2855.951870961" lastFinishedPulling="2026-01-27 17:44:42.563363461 +0000 UTC m=+2857.662337020" observedRunningTime="2026-01-27 17:44:42.896321363 +0000 UTC m=+2857.995294922" watchObservedRunningTime="2026-01-27 17:44:42.904749079 +0000 UTC m=+2858.003722668" Jan 27 17:44:46 crc kubenswrapper[5049]: I0127 17:44:46.094550 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:46 crc kubenswrapper[5049]: I0127 17:44:46.095177 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:46 crc kubenswrapper[5049]: I0127 17:44:46.143885 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:46 crc kubenswrapper[5049]: I0127 17:44:46.278381 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:46 crc kubenswrapper[5049]: I0127 17:44:46.278462 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:46 crc 
kubenswrapper[5049]: I0127 17:44:46.342968 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:46 crc kubenswrapper[5049]: I0127 17:44:46.974365 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:46 crc kubenswrapper[5049]: I0127 17:44:46.993120 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:47 crc kubenswrapper[5049]: I0127 17:44:47.781324 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:44:47 crc kubenswrapper[5049]: I0127 17:44:47.781428 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:44:48 crc kubenswrapper[5049]: I0127 17:44:48.909125 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ntpts"] Jan 27 17:44:48 crc kubenswrapper[5049]: I0127 17:44:48.931022 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ntpts" podUID="eb52bc7d-c48d-4ca1-85c4-2b527799d95c" containerName="registry-server" containerID="cri-o://6a795e75956bbca6694a7b198f0198686350b9eaf6ca25adfafbf0e5801bee3e" gracePeriod=2 Jan 27 17:44:49 crc kubenswrapper[5049]: I0127 17:44:49.510788 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cxmqf"] Jan 27 17:44:49 crc kubenswrapper[5049]: I0127 17:44:49.511106 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cxmqf" podUID="4f2d38ba-b563-4654-89fe-980e52d11fcd" containerName="registry-server" containerID="cri-o://137d560d6264118670f95d0c6492b8d027ea4947e4443033781b358868f0b70c" gracePeriod=2 Jan 27 17:44:49 crc kubenswrapper[5049]: I0127 17:44:49.638840 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:49 crc kubenswrapper[5049]: I0127 17:44:49.638906 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:49 crc kubenswrapper[5049]: I0127 17:44:49.713292 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:49 crc kubenswrapper[5049]: I0127 17:44:49.944443 5049 generic.go:334] "Generic (PLEG): container finished" podID="4f2d38ba-b563-4654-89fe-980e52d11fcd" containerID="137d560d6264118670f95d0c6492b8d027ea4947e4443033781b358868f0b70c" exitCode=0 Jan 27 17:44:49 crc kubenswrapper[5049]: I0127 17:44:49.944533 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxmqf" event={"ID":"4f2d38ba-b563-4654-89fe-980e52d11fcd","Type":"ContainerDied","Data":"137d560d6264118670f95d0c6492b8d027ea4947e4443033781b358868f0b70c"} Jan 27 17:44:49 crc 
kubenswrapper[5049]: I0127 17:44:49.947404 5049 generic.go:334] "Generic (PLEG): container finished" podID="eb52bc7d-c48d-4ca1-85c4-2b527799d95c" containerID="6a795e75956bbca6694a7b198f0198686350b9eaf6ca25adfafbf0e5801bee3e" exitCode=0 Jan 27 17:44:49 crc kubenswrapper[5049]: I0127 17:44:49.947469 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntpts" event={"ID":"eb52bc7d-c48d-4ca1-85c4-2b527799d95c","Type":"ContainerDied","Data":"6a795e75956bbca6694a7b198f0198686350b9eaf6ca25adfafbf0e5801bee3e"} Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.001061 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x48m7" Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.712520 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cxmqf" Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.719866 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ntpts" Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.816200 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqsmn\" (UniqueName: \"kubernetes.io/projected/4f2d38ba-b563-4654-89fe-980e52d11fcd-kube-api-access-nqsmn\") pod \"4f2d38ba-b563-4654-89fe-980e52d11fcd\" (UID: \"4f2d38ba-b563-4654-89fe-980e52d11fcd\") " Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.816280 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2d38ba-b563-4654-89fe-980e52d11fcd-catalog-content\") pod \"4f2d38ba-b563-4654-89fe-980e52d11fcd\" (UID: \"4f2d38ba-b563-4654-89fe-980e52d11fcd\") " Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.816416 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2d38ba-b563-4654-89fe-980e52d11fcd-utilities\") pod \"4f2d38ba-b563-4654-89fe-980e52d11fcd\" (UID: \"4f2d38ba-b563-4654-89fe-980e52d11fcd\") " Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.817240 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f2d38ba-b563-4654-89fe-980e52d11fcd-utilities" (OuterVolumeSpecName: "utilities") pod "4f2d38ba-b563-4654-89fe-980e52d11fcd" (UID: "4f2d38ba-b563-4654-89fe-980e52d11fcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.823948 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f2d38ba-b563-4654-89fe-980e52d11fcd-kube-api-access-nqsmn" (OuterVolumeSpecName: "kube-api-access-nqsmn") pod "4f2d38ba-b563-4654-89fe-980e52d11fcd" (UID: "4f2d38ba-b563-4654-89fe-980e52d11fcd"). InnerVolumeSpecName "kube-api-access-nqsmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.868230 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f2d38ba-b563-4654-89fe-980e52d11fcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f2d38ba-b563-4654-89fe-980e52d11fcd" (UID: "4f2d38ba-b563-4654-89fe-980e52d11fcd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.917475 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzh67\" (UniqueName: \"kubernetes.io/projected/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-kube-api-access-xzh67\") pod \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\" (UID: \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\") " Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.917705 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-utilities\") pod \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\" (UID: \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\") " Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.917822 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-catalog-content\") pod \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\" (UID: \"eb52bc7d-c48d-4ca1-85c4-2b527799d95c\") " Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.918153 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f2d38ba-b563-4654-89fe-980e52d11fcd-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.918182 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqsmn\" (UniqueName: \"kubernetes.io/projected/4f2d38ba-b563-4654-89fe-980e52d11fcd-kube-api-access-nqsmn\") on node \"crc\" DevicePath \"\"" Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.918200 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f2d38ba-b563-4654-89fe-980e52d11fcd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.918726 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-utilities" (OuterVolumeSpecName: "utilities") pod "eb52bc7d-c48d-4ca1-85c4-2b527799d95c" (UID: "eb52bc7d-c48d-4ca1-85c4-2b527799d95c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.920993 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-kube-api-access-xzh67" (OuterVolumeSpecName: "kube-api-access-xzh67") pod "eb52bc7d-c48d-4ca1-85c4-2b527799d95c" (UID: "eb52bc7d-c48d-4ca1-85c4-2b527799d95c"). InnerVolumeSpecName "kube-api-access-xzh67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.957243 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cxmqf" event={"ID":"4f2d38ba-b563-4654-89fe-980e52d11fcd","Type":"ContainerDied","Data":"fe491ccd569ddf4d9b80ea6eb3ecba66ec0bc962b2af6b9c9dab0b97644305d8"} Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.957318 5049 scope.go:117] "RemoveContainer" containerID="137d560d6264118670f95d0c6492b8d027ea4947e4443033781b358868f0b70c" Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.957640 5049 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.961059 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntpts" event={"ID":"eb52bc7d-c48d-4ca1-85c4-2b527799d95c","Type":"ContainerDied","Data":"2656182716e7d09f3ec2318072b9941865b9662ad6a37dbc0efb11b3f19b8783"}
Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.961177 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ntpts"
Jan 27 17:44:50 crc kubenswrapper[5049]: I0127 17:44:50.992251 5049 scope.go:117] "RemoveContainer" containerID="1c7bb3fdd81b344a595d4d1914cb5134c35790819df4d6f40c8a803145712183"
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.000095 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cxmqf"]
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.004623 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb52bc7d-c48d-4ca1-85c4-2b527799d95c" (UID: "eb52bc7d-c48d-4ca1-85c4-2b527799d95c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.008539 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cxmqf"]
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.017984 5049 scope.go:117] "RemoveContainer" containerID="da295c72f0f559e40f53e0a2271af034724bc27e44200550539d441ed0fb4cc8"
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.019094 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.019123 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.019139 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzh67\" (UniqueName: \"kubernetes.io/projected/eb52bc7d-c48d-4ca1-85c4-2b527799d95c-kube-api-access-xzh67\") on node \"crc\" DevicePath \"\""
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.041447 5049 scope.go:117] "RemoveContainer" containerID="6a795e75956bbca6694a7b198f0198686350b9eaf6ca25adfafbf0e5801bee3e"
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.060572 5049 scope.go:117] "RemoveContainer" containerID="b34f7cd4c29210617f4bf7ea9571722857345ea2ba615b2dafb63037d13223a2"
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.078935 5049 scope.go:117] "RemoveContainer" containerID="6a249f1d851bdf58672ddc4a1a51869c20cdff3407e4085f51c4c49af3617e3f"
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.303178 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ntpts"]
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.313101 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ntpts"]
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.320027 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x48m7"]
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.659973 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f2d38ba-b563-4654-89fe-980e52d11fcd" path="/var/lib/kubelet/pods/4f2d38ba-b563-4654-89fe-980e52d11fcd/volumes"
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.661282 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb52bc7d-c48d-4ca1-85c4-2b527799d95c" path="/var/lib/kubelet/pods/eb52bc7d-c48d-4ca1-85c4-2b527799d95c/volumes"
Jan 27 17:44:51 crc kubenswrapper[5049]: I0127 17:44:51.973356 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x48m7" podUID="40193211-33ab-4d21-ba3e-cd853303f876" containerName="registry-server" containerID="cri-o://96fb373bdd3b33c496c9978e633c390b0795082d5e92b63d5ff277d0ab97e3e8" gracePeriod=2
Jan 27 17:44:52 crc kubenswrapper[5049]: I0127 17:44:52.989917 5049 generic.go:334] "Generic (PLEG): container finished" podID="40193211-33ab-4d21-ba3e-cd853303f876" containerID="96fb373bdd3b33c496c9978e633c390b0795082d5e92b63d5ff277d0ab97e3e8" exitCode=0
Jan 27 17:44:52 crc kubenswrapper[5049]: I0127 17:44:52.989982 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x48m7" event={"ID":"40193211-33ab-4d21-ba3e-cd853303f876","Type":"ContainerDied","Data":"96fb373bdd3b33c496c9978e633c390b0795082d5e92b63d5ff277d0ab97e3e8"}
Jan 27 17:44:53 crc kubenswrapper[5049]: I0127 17:44:53.202271 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x48m7"
Jan 27 17:44:53 crc kubenswrapper[5049]: I0127 17:44:53.358904 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40193211-33ab-4d21-ba3e-cd853303f876-catalog-content\") pod \"40193211-33ab-4d21-ba3e-cd853303f876\" (UID: \"40193211-33ab-4d21-ba3e-cd853303f876\") "
Jan 27 17:44:53 crc kubenswrapper[5049]: I0127 17:44:53.359034 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgfl5\" (UniqueName: \"kubernetes.io/projected/40193211-33ab-4d21-ba3e-cd853303f876-kube-api-access-hgfl5\") pod \"40193211-33ab-4d21-ba3e-cd853303f876\" (UID: \"40193211-33ab-4d21-ba3e-cd853303f876\") "
Jan 27 17:44:53 crc kubenswrapper[5049]: I0127 17:44:53.359162 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40193211-33ab-4d21-ba3e-cd853303f876-utilities\") pod \"40193211-33ab-4d21-ba3e-cd853303f876\" (UID: \"40193211-33ab-4d21-ba3e-cd853303f876\") "
Jan 27 17:44:53 crc kubenswrapper[5049]: I0127 17:44:53.360759 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40193211-33ab-4d21-ba3e-cd853303f876-utilities" (OuterVolumeSpecName: "utilities") pod "40193211-33ab-4d21-ba3e-cd853303f876" (UID: "40193211-33ab-4d21-ba3e-cd853303f876"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:44:53 crc kubenswrapper[5049]: I0127 17:44:53.367551 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40193211-33ab-4d21-ba3e-cd853303f876-kube-api-access-hgfl5" (OuterVolumeSpecName: "kube-api-access-hgfl5") pod "40193211-33ab-4d21-ba3e-cd853303f876" (UID: "40193211-33ab-4d21-ba3e-cd853303f876"). InnerVolumeSpecName "kube-api-access-hgfl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:44:53 crc kubenswrapper[5049]: I0127 17:44:53.408653 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40193211-33ab-4d21-ba3e-cd853303f876-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40193211-33ab-4d21-ba3e-cd853303f876" (UID: "40193211-33ab-4d21-ba3e-cd853303f876"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:44:53 crc kubenswrapper[5049]: I0127 17:44:53.462006 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40193211-33ab-4d21-ba3e-cd853303f876-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 17:44:53 crc kubenswrapper[5049]: I0127 17:44:53.462064 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40193211-33ab-4d21-ba3e-cd853303f876-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 17:44:53 crc kubenswrapper[5049]: I0127 17:44:53.462090 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgfl5\" (UniqueName: \"kubernetes.io/projected/40193211-33ab-4d21-ba3e-cd853303f876-kube-api-access-hgfl5\") on node \"crc\" DevicePath \"\""
Jan 27 17:44:54 crc kubenswrapper[5049]: I0127 17:44:54.003103 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x48m7" event={"ID":"40193211-33ab-4d21-ba3e-cd853303f876","Type":"ContainerDied","Data":"bbe002545b156c8166382d229d8cfadbfb1892fd3b530060abfdfe8574ba548a"}
Jan 27 17:44:54 crc kubenswrapper[5049]: I0127 17:44:54.003184 5049 scope.go:117] "RemoveContainer" containerID="96fb373bdd3b33c496c9978e633c390b0795082d5e92b63d5ff277d0ab97e3e8"
Jan 27 17:44:54 crc kubenswrapper[5049]: I0127 17:44:54.003400 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x48m7"
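
[Editor's note] The "SyncLoop DELETE"/"SyncLoop REMOVE" pairs above are the kubelet's view of API-initiated pod deletion for the marketplace catalog pods. To observe the same deletions from outside the kubelet, a client-go informer on the namespace would work; this is a minimal sketch assuming a reachable kubeconfig, and is not how the kubelet itself consumes its pod sources:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config points at the cluster; error handling is minimal.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 0, informers.WithNamespace("openshift-marketplace"))
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		DeleteFunc: func(obj interface{}) {
			if pod, ok := obj.(*corev1.Pod); ok {
				fmt.Printf("pod deleted: %s/%s uid=%s\n", pod.Namespace, pod.Name, pod.UID)
			}
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block forever; demo only
}
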
Jan 27 17:44:54 crc kubenswrapper[5049]: I0127 17:44:54.044664 5049 scope.go:117] "RemoveContainer" containerID="9f5235831553247ba285369a3a4e3e27e80a9175ac2d73114ad7113109c466c9"
Jan 27 17:44:54 crc kubenswrapper[5049]: I0127 17:44:54.048706 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x48m7"]
Jan 27 17:44:54 crc kubenswrapper[5049]: I0127 17:44:54.069359 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x48m7"]
Jan 27 17:44:54 crc kubenswrapper[5049]: I0127 17:44:54.080124 5049 scope.go:117] "RemoveContainer" containerID="d0fba2ed2a182a397d07c01839d5aaaea7d4163202bf33b86e7848dbab8169e6"
Jan 27 17:44:55 crc kubenswrapper[5049]: I0127 17:44:55.663926 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40193211-33ab-4d21-ba3e-cd853303f876" path="/var/lib/kubelet/pods/40193211-33ab-4d21-ba3e-cd853303f876/volumes"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.158280 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"]
Jan 27 17:45:00 crc kubenswrapper[5049]: E0127 17:45:00.158880 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40193211-33ab-4d21-ba3e-cd853303f876" containerName="registry-server"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.158894 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="40193211-33ab-4d21-ba3e-cd853303f876" containerName="registry-server"
Jan 27 17:45:00 crc kubenswrapper[5049]: E0127 17:45:00.158916 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40193211-33ab-4d21-ba3e-cd853303f876" containerName="extract-content"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.158924 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="40193211-33ab-4d21-ba3e-cd853303f876" containerName="extract-content"
Jan 27 17:45:00 crc kubenswrapper[5049]: E0127 17:45:00.158939 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2d38ba-b563-4654-89fe-980e52d11fcd" containerName="extract-content"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.158947 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2d38ba-b563-4654-89fe-980e52d11fcd" containerName="extract-content"
Jan 27 17:45:00 crc kubenswrapper[5049]: E0127 17:45:00.158976 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb52bc7d-c48d-4ca1-85c4-2b527799d95c" containerName="extract-content"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.158984 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb52bc7d-c48d-4ca1-85c4-2b527799d95c" containerName="extract-content"
Jan 27 17:45:00 crc kubenswrapper[5049]: E0127 17:45:00.158997 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2d38ba-b563-4654-89fe-980e52d11fcd" containerName="extract-utilities"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.159005 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2d38ba-b563-4654-89fe-980e52d11fcd" containerName="extract-utilities"
Jan 27 17:45:00 crc kubenswrapper[5049]: E0127 17:45:00.159019 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb52bc7d-c48d-4ca1-85c4-2b527799d95c" containerName="extract-utilities"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.159027 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb52bc7d-c48d-4ca1-85c4-2b527799d95c" containerName="extract-utilities"
Jan 27 17:45:00 crc kubenswrapper[5049]: E0127 17:45:00.159040 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40193211-33ab-4d21-ba3e-cd853303f876" containerName="extract-utilities"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.159048 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="40193211-33ab-4d21-ba3e-cd853303f876" containerName="extract-utilities"
Jan 27 17:45:00 crc kubenswrapper[5049]: E0127 17:45:00.159061 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2d38ba-b563-4654-89fe-980e52d11fcd" containerName="registry-server"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.159069 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2d38ba-b563-4654-89fe-980e52d11fcd" containerName="registry-server"
Jan 27 17:45:00 crc kubenswrapper[5049]: E0127 17:45:00.159081 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb52bc7d-c48d-4ca1-85c4-2b527799d95c" containerName="registry-server"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.159088 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb52bc7d-c48d-4ca1-85c4-2b527799d95c" containerName="registry-server"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.159265 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb52bc7d-c48d-4ca1-85c4-2b527799d95c" containerName="registry-server"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.159280 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f2d38ba-b563-4654-89fe-980e52d11fcd" containerName="registry-server"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.159302 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="40193211-33ab-4d21-ba3e-cd853303f876" containerName="registry-server"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.159839 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"
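
[Editor's note] The numeric suffix in collect-profiles-29492265-4lvxk is decodable: the Kubernetes CronJob controller names each Job it creates <cronjob>-<scheduled time in minutes since the Unix epoch>. A quick Go check that 29492265 matches this pod's creation timestamp, and that the collect-profiles-29492220 pod deleted a few records below was the run scheduled 45 minutes earlier:

package main

import (
	"fmt"
	"time"
)

func main() {
	// CronJob-created Jobs are named <cronjob>-<minutes since the Unix epoch>.
	for _, suffix := range []int64{29492265, 29492220} {
		fmt.Println(suffix, "->", time.Unix(suffix*60, 0).UTC())
	}
	// 29492265 -> 2026-01-27 17:45:00 +0000 UTC (matches podCreationTimestamp below)
	// 29492220 -> 2026-01-27 17:00:00 +0000 UTC (the older job removed below)
}
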
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.161782 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.163910 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.179051 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"]
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.183294 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5af233b-c094-44b3-bcee-89cd3f34d4b9-config-volume\") pod \"collect-profiles-29492265-4lvxk\" (UID: \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.183424 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5af233b-c094-44b3-bcee-89cd3f34d4b9-secret-volume\") pod \"collect-profiles-29492265-4lvxk\" (UID: \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.183537 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npxck\" (UniqueName: \"kubernetes.io/projected/e5af233b-c094-44b3-bcee-89cd3f34d4b9-kube-api-access-npxck\") pod \"collect-profiles-29492265-4lvxk\" (UID: \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.284526 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npxck\" (UniqueName: \"kubernetes.io/projected/e5af233b-c094-44b3-bcee-89cd3f34d4b9-kube-api-access-npxck\") pod \"collect-profiles-29492265-4lvxk\" (UID: \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.284646 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5af233b-c094-44b3-bcee-89cd3f34d4b9-config-volume\") pod \"collect-profiles-29492265-4lvxk\" (UID: \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.284727 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5af233b-c094-44b3-bcee-89cd3f34d4b9-secret-volume\") pod \"collect-profiles-29492265-4lvxk\" (UID: \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.286832 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5af233b-c094-44b3-bcee-89cd3f34d4b9-config-volume\") pod \"collect-profiles-29492265-4lvxk\" (UID: \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.296842 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5af233b-c094-44b3-bcee-89cd3f34d4b9-secret-volume\") pod \"collect-profiles-29492265-4lvxk\" (UID: \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.306126 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npxck\" (UniqueName: \"kubernetes.io/projected/e5af233b-c094-44b3-bcee-89cd3f34d4b9-kube-api-access-npxck\") pod \"collect-profiles-29492265-4lvxk\" (UID: \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.496930 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"
Jan 27 17:45:00 crc kubenswrapper[5049]: I0127 17:45:00.779597 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"]
Jan 27 17:45:00 crc kubenswrapper[5049]: W0127 17:45:00.780194 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5af233b_c094_44b3_bcee_89cd3f34d4b9.slice/crio-f4f968241027ac42ca5decf087f8e38d1bf27a80ea5f2bfe1f61507196bf649b WatchSource:0}: Error finding container f4f968241027ac42ca5decf087f8e38d1bf27a80ea5f2bfe1f61507196bf649b: Status 404 returned error can't find the container with id f4f968241027ac42ca5decf087f8e38d1bf27a80ea5f2bfe1f61507196bf649b
Jan 27 17:45:01 crc kubenswrapper[5049]: I0127 17:45:01.092866 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk" event={"ID":"e5af233b-c094-44b3-bcee-89cd3f34d4b9","Type":"ContainerStarted","Data":"b191a98e41e8bbe9dfc6c6ef5f0f2b2fd8f26966762c999c85267f147d960fdf"}
Jan 27 17:45:01 crc kubenswrapper[5049]: I0127 17:45:01.093237 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk" event={"ID":"e5af233b-c094-44b3-bcee-89cd3f34d4b9","Type":"ContainerStarted","Data":"f4f968241027ac42ca5decf087f8e38d1bf27a80ea5f2bfe1f61507196bf649b"}
Jan 27 17:45:01 crc kubenswrapper[5049]: I0127 17:45:01.116701 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk" podStartSLOduration=1.116656495 podStartE2EDuration="1.116656495s" podCreationTimestamp="2026-01-27 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 17:45:01.11113007 +0000 UTC m=+2876.210103629" watchObservedRunningTime="2026-01-27 17:45:01.116656495 +0000 UTC m=+2876.215630054"
Jan 27 17:45:02 crc kubenswrapper[5049]: I0127 17:45:02.107225 5049 generic.go:334] "Generic (PLEG): container finished" podID="e5af233b-c094-44b3-bcee-89cd3f34d4b9" containerID="b191a98e41e8bbe9dfc6c6ef5f0f2b2fd8f26966762c999c85267f147d960fdf" exitCode=0
Jan 27 17:45:02 crc kubenswrapper[5049]: I0127 17:45:02.107378 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk" event={"ID":"e5af233b-c094-44b3-bcee-89cd3f34d4b9","Type":"ContainerDied","Data":"b191a98e41e8bbe9dfc6c6ef5f0f2b2fd8f26966762c999c85267f147d960fdf"}
Jan 27 17:45:03 crc kubenswrapper[5049]: I0127 17:45:03.456206 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"
Jan 27 17:45:03 crc kubenswrapper[5049]: I0127 17:45:03.536084 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5af233b-c094-44b3-bcee-89cd3f34d4b9-config-volume\") pod \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\" (UID: \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\") "
Jan 27 17:45:03 crc kubenswrapper[5049]: I0127 17:45:03.536159 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5af233b-c094-44b3-bcee-89cd3f34d4b9-secret-volume\") pod \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\" (UID: \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\") "
Jan 27 17:45:03 crc kubenswrapper[5049]: I0127 17:45:03.536271 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npxck\" (UniqueName: \"kubernetes.io/projected/e5af233b-c094-44b3-bcee-89cd3f34d4b9-kube-api-access-npxck\") pod \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\" (UID: \"e5af233b-c094-44b3-bcee-89cd3f34d4b9\") "
Jan 27 17:45:03 crc kubenswrapper[5049]: I0127 17:45:03.536731 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5af233b-c094-44b3-bcee-89cd3f34d4b9-config-volume" (OuterVolumeSpecName: "config-volume") pod "e5af233b-c094-44b3-bcee-89cd3f34d4b9" (UID: "e5af233b-c094-44b3-bcee-89cd3f34d4b9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 17:45:03 crc kubenswrapper[5049]: I0127 17:45:03.548028 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5af233b-c094-44b3-bcee-89cd3f34d4b9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e5af233b-c094-44b3-bcee-89cd3f34d4b9" (UID: "e5af233b-c094-44b3-bcee-89cd3f34d4b9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 17:45:03 crc kubenswrapper[5049]: I0127 17:45:03.548078 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5af233b-c094-44b3-bcee-89cd3f34d4b9-kube-api-access-npxck" (OuterVolumeSpecName: "kube-api-access-npxck") pod "e5af233b-c094-44b3-bcee-89cd3f34d4b9" (UID: "e5af233b-c094-44b3-bcee-89cd3f34d4b9"). InnerVolumeSpecName "kube-api-access-npxck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:45:03 crc kubenswrapper[5049]: I0127 17:45:03.638403 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npxck\" (UniqueName: \"kubernetes.io/projected/e5af233b-c094-44b3-bcee-89cd3f34d4b9-kube-api-access-npxck\") on node \"crc\" DevicePath \"\""
Jan 27 17:45:03 crc kubenswrapper[5049]: I0127 17:45:03.638815 5049 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5af233b-c094-44b3-bcee-89cd3f34d4b9-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 17:45:03 crc kubenswrapper[5049]: I0127 17:45:03.638829 5049 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5af233b-c094-44b3-bcee-89cd3f34d4b9-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 17:45:04 crc kubenswrapper[5049]: I0127 17:45:04.130614 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk" event={"ID":"e5af233b-c094-44b3-bcee-89cd3f34d4b9","Type":"ContainerDied","Data":"f4f968241027ac42ca5decf087f8e38d1bf27a80ea5f2bfe1f61507196bf649b"}
Jan 27 17:45:04 crc kubenswrapper[5049]: I0127 17:45:04.130734 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4f968241027ac42ca5decf087f8e38d1bf27a80ea5f2bfe1f61507196bf649b"
Jan 27 17:45:04 crc kubenswrapper[5049]: I0127 17:45:04.130896 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"
Jan 27 17:45:04 crc kubenswrapper[5049]: I0127 17:45:04.558416 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"]
Jan 27 17:45:04 crc kubenswrapper[5049]: I0127 17:45:04.571565 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492220-d55cm"]
Jan 27 17:45:05 crc kubenswrapper[5049]: I0127 17:45:05.671464 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb" path="/var/lib/kubelet/pods/eb72ffc1-c49f-4ad0-bafa-5d6a4b0d86fb/volumes"
Jan 27 17:45:14 crc kubenswrapper[5049]: I0127 17:45:14.994667 5049 scope.go:117] "RemoveContainer" containerID="083764d7f785cc193b914b87b0ca158cb847a0424b9e9ebaf94e54ce0439e5ad"
Jan 27 17:45:17 crc kubenswrapper[5049]: I0127 17:45:17.781432 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:45:17 crc kubenswrapper[5049]: I0127 17:45:17.781975 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:45:47 crc kubenswrapper[5049]: I0127 17:45:47.781762 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
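
[Editor's note] The repeating probe failures above target http://127.0.0.1:8798/health; "connection refused" means nothing was listening on the port at all, not that the daemon answered unhealthy. For orientation only, a hypothetical stand-in for such an endpoint (the real machine-config-daemon wiring differs):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical stand-in for a daemon health endpoint like the one the
	// kubelet is probing above.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "ok")
	})
	// "connection refused" in the log means this listener was absent entirely:
	// the process was down or not yet serving, not merely returning non-2xx.
	if err := http.ListenAndServe("127.0.0.1:8798", nil); err != nil {
		panic(err)
	}
}
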
Jan 27 17:45:47 crc kubenswrapper[5049]: I0127 17:45:47.782390 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:45:47 crc kubenswrapper[5049]: I0127 17:45:47.782462 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9"
Jan 27 17:45:47 crc kubenswrapper[5049]: I0127 17:45:47.783077 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"82a206bc083b77e2da8141673ab7495d9eea3b1a439feb7ffc5dc004dcb1865c"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 17:45:47 crc kubenswrapper[5049]: I0127 17:45:47.783141 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://82a206bc083b77e2da8141673ab7495d9eea3b1a439feb7ffc5dc004dcb1865c" gracePeriod=600
Jan 27 17:45:48 crc kubenswrapper[5049]: I0127 17:45:48.523514 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="82a206bc083b77e2da8141673ab7495d9eea3b1a439feb7ffc5dc004dcb1865c" exitCode=0
Jan 27 17:45:48 crc kubenswrapper[5049]: I0127 17:45:48.523578 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"82a206bc083b77e2da8141673ab7495d9eea3b1a439feb7ffc5dc004dcb1865c"}
Jan 27 17:45:48 crc kubenswrapper[5049]: I0127 17:45:48.523802 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649"}
Jan 27 17:45:48 crc kubenswrapper[5049]: I0127 17:45:48.523823 5049 scope.go:117] "RemoveContainer" containerID="6429cf43def1592a9816e54a5b8ca62a02f216bba09b4921d78662c98f1492a5"
Jan 27 17:48:17 crc kubenswrapper[5049]: I0127 17:48:17.781254 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:48:17 crc kubenswrapper[5049]: I0127 17:48:17.782141 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:48:47 crc kubenswrapper[5049]: I0127 17:48:47.781763 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:48:47 crc kubenswrapper[5049]: I0127 17:48:47.782467 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:49:17 crc kubenswrapper[5049]: I0127 17:49:17.781968 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 17:49:17 crc kubenswrapper[5049]: I0127 17:49:17.782884 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 17:49:17 crc kubenswrapper[5049]: I0127 17:49:17.782957 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9"
Jan 27 17:49:17 crc kubenswrapper[5049]: I0127 17:49:17.783891 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 17:49:17 crc kubenswrapper[5049]: I0127 17:49:17.783991 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" gracePeriod=600
Jan 27 17:49:17 crc kubenswrapper[5049]: E0127 17:49:17.915746 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:49:18 crc kubenswrapper[5049]: I0127 17:49:18.331930 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" exitCode=0
Jan 27 17:49:18 crc kubenswrapper[5049]: I0127 17:49:18.331976 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649"}
Jan 27 17:49:18 crc kubenswrapper[5049]: I0127 17:49:18.332011 5049 scope.go:117] "RemoveContainer" containerID="82a206bc083b77e2da8141673ab7495d9eea3b1a439feb7ffc5dc004dcb1865c"
Jan 27 17:49:18 crc kubenswrapper[5049]: I0127 17:49:18.333431 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649"
Jan 27 17:49:18 crc kubenswrapper[5049]: E0127 17:49:18.333948 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:49:28 crc kubenswrapper[5049]: I0127 17:49:28.646969 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649"
Jan 27 17:49:28 crc kubenswrapper[5049]: E0127 17:49:28.647866 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:49:39 crc kubenswrapper[5049]: I0127 17:49:39.646128 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649"
Jan 27 17:49:39 crc kubenswrapper[5049]: E0127 17:49:39.647399 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:49:53 crc kubenswrapper[5049]: I0127 17:49:53.647096 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649"
Jan 27 17:49:53 crc kubenswrapper[5049]: E0127 17:49:53.648303 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:50:08 crc kubenswrapper[5049]: I0127 17:50:08.646240 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649"
Jan 27 17:50:08 crc kubenswrapper[5049]: E0127 17:50:08.647306 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:50:17 crc kubenswrapper[5049]: I0127 17:50:17.758526 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tpgvp"]
Jan 27 17:50:17 crc kubenswrapper[5049]: E0127 17:50:17.759272 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5af233b-c094-44b3-bcee-89cd3f34d4b9" containerName="collect-profiles"
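
[Editor's note] The "back-off 5m0s" in the CrashLoopBackOff errors above is the kubelet's container restart backoff at its cap: to the best of my knowledge the delay starts at 10s and doubles per failed restart up to 5m, while the more frequent "Error syncing pod" lines come from the pod worker's own retry loop, not the restart timer. A sketch of that doubling, with the constants restated here as assumptions rather than read from kubelet source:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: initial crash-loop delay 10s, doubling,
	// capped at 5m ("back-off 5m0s" in the log means the cap was reached).
	const (
		initial = 10 * time.Second
		cap     = 5 * time.Minute
	)
	d := initial
	for i := 1; ; i++ {
		fmt.Printf("restart %d: wait %v\n", i, d)
		if d >= cap {
			break
		}
		d *= 2
		if d > cap {
			d = cap
		}
	}
	// Output: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s
}
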
Jan 27 17:50:17 crc kubenswrapper[5049]: I0127 17:50:17.759286 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5af233b-c094-44b3-bcee-89cd3f34d4b9" containerName="collect-profiles"
Jan 27 17:50:17 crc kubenswrapper[5049]: I0127 17:50:17.759437 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5af233b-c094-44b3-bcee-89cd3f34d4b9" containerName="collect-profiles"
Jan 27 17:50:17 crc kubenswrapper[5049]: I0127 17:50:17.760465 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:17 crc kubenswrapper[5049]: I0127 17:50:17.782019 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tpgvp"]
Jan 27 17:50:17 crc kubenswrapper[5049]: I0127 17:50:17.916103 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c08c39-aed8-4e57-a288-e37b7eed2793-catalog-content\") pod \"redhat-operators-tpgvp\" (UID: \"c0c08c39-aed8-4e57-a288-e37b7eed2793\") " pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:17 crc kubenswrapper[5049]: I0127 17:50:17.916284 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c08c39-aed8-4e57-a288-e37b7eed2793-utilities\") pod \"redhat-operators-tpgvp\" (UID: \"c0c08c39-aed8-4e57-a288-e37b7eed2793\") " pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:17 crc kubenswrapper[5049]: I0127 17:50:17.916321 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsv94\" (UniqueName: \"kubernetes.io/projected/c0c08c39-aed8-4e57-a288-e37b7eed2793-kube-api-access-dsv94\") pod \"redhat-operators-tpgvp\" (UID: \"c0c08c39-aed8-4e57-a288-e37b7eed2793\") " pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:18 crc kubenswrapper[5049]: I0127 17:50:18.017834 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c08c39-aed8-4e57-a288-e37b7eed2793-utilities\") pod \"redhat-operators-tpgvp\" (UID: \"c0c08c39-aed8-4e57-a288-e37b7eed2793\") " pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:18 crc kubenswrapper[5049]: I0127 17:50:18.017885 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsv94\" (UniqueName: \"kubernetes.io/projected/c0c08c39-aed8-4e57-a288-e37b7eed2793-kube-api-access-dsv94\") pod \"redhat-operators-tpgvp\" (UID: \"c0c08c39-aed8-4e57-a288-e37b7eed2793\") " pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:18 crc kubenswrapper[5049]: I0127 17:50:18.017943 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c08c39-aed8-4e57-a288-e37b7eed2793-catalog-content\") pod \"redhat-operators-tpgvp\" (UID: \"c0c08c39-aed8-4e57-a288-e37b7eed2793\") " pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:18 crc kubenswrapper[5049]: I0127 17:50:18.018298 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c08c39-aed8-4e57-a288-e37b7eed2793-utilities\") pod \"redhat-operators-tpgvp\" (UID: \"c0c08c39-aed8-4e57-a288-e37b7eed2793\") " pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:18 crc kubenswrapper[5049]: I0127 17:50:18.018376 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c08c39-aed8-4e57-a288-e37b7eed2793-catalog-content\") pod \"redhat-operators-tpgvp\" (UID: \"c0c08c39-aed8-4e57-a288-e37b7eed2793\") " pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:18 crc kubenswrapper[5049]: I0127 17:50:18.045335 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsv94\" (UniqueName: \"kubernetes.io/projected/c0c08c39-aed8-4e57-a288-e37b7eed2793-kube-api-access-dsv94\") pod \"redhat-operators-tpgvp\" (UID: \"c0c08c39-aed8-4e57-a288-e37b7eed2793\") " pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:18 crc kubenswrapper[5049]: I0127 17:50:18.079228 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:18 crc kubenswrapper[5049]: I0127 17:50:18.437627 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tpgvp"]
Jan 27 17:50:19 crc kubenswrapper[5049]: I0127 17:50:19.202209 5049 generic.go:334] "Generic (PLEG): container finished" podID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerID="a175f2a165de7f2ec3a1bc33b462e68655fd523b69d58dd1a4a1c047e5441eb0" exitCode=0
Jan 27 17:50:19 crc kubenswrapper[5049]: I0127 17:50:19.202318 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpgvp" event={"ID":"c0c08c39-aed8-4e57-a288-e37b7eed2793","Type":"ContainerDied","Data":"a175f2a165de7f2ec3a1bc33b462e68655fd523b69d58dd1a4a1c047e5441eb0"}
Jan 27 17:50:19 crc kubenswrapper[5049]: I0127 17:50:19.202484 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpgvp" event={"ID":"c0c08c39-aed8-4e57-a288-e37b7eed2793","Type":"ContainerStarted","Data":"bc748ceeaf068d8f8c219390006f62aeef7d1a49a7749f96aee6eb77a3505be8"}
Jan 27 17:50:19 crc kubenswrapper[5049]: I0127 17:50:19.204073 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 17:50:20 crc kubenswrapper[5049]: I0127 17:50:20.645974 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649"
Jan 27 17:50:20 crc kubenswrapper[5049]: E0127 17:50:20.646761 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:50:21 crc kubenswrapper[5049]: I0127 17:50:21.215735 5049 generic.go:334] "Generic (PLEG): container finished" podID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerID="0c30b3c25910dcff8415c13540c1372905b016837d09c461ffc90de8991cb060" exitCode=0
Jan 27 17:50:21 crc kubenswrapper[5049]: I0127 17:50:21.215926 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpgvp" event={"ID":"c0c08c39-aed8-4e57-a288-e37b7eed2793","Type":"ContainerDied","Data":"0c30b3c25910dcff8415c13540c1372905b016837d09c461ffc90de8991cb060"}
Jan 27 17:50:22 crc kubenswrapper[5049]: I0127 17:50:22.223744 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpgvp" event={"ID":"c0c08c39-aed8-4e57-a288-e37b7eed2793","Type":"ContainerStarted","Data":"3960f1712616526d7f74805350ebbf23101ee3ab109d9eb27dec37568e56fa4f"}
Jan 27 17:50:22 crc kubenswrapper[5049]: I0127 17:50:22.249131 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tpgvp" podStartSLOduration=2.671886203 podStartE2EDuration="5.249108455s" podCreationTimestamp="2026-01-27 17:50:17 +0000 UTC" firstStartedPulling="2026-01-27 17:50:19.203809295 +0000 UTC m=+3194.302782834" lastFinishedPulling="2026-01-27 17:50:21.781031537 +0000 UTC m=+3196.880005086" observedRunningTime="2026-01-27 17:50:22.246373491 +0000 UTC m=+3197.345347030" watchObservedRunningTime="2026-01-27 17:50:22.249108455 +0000 UTC m=+3197.348082014"
Jan 27 17:50:28 crc kubenswrapper[5049]: I0127 17:50:28.080914 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:28 crc kubenswrapper[5049]: I0127 17:50:28.081416 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:29 crc kubenswrapper[5049]: I0127 17:50:29.122507 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tpgvp" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="registry-server" probeResult="failure" output=<
Jan 27 17:50:29 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s
Jan 27 17:50:29 crc kubenswrapper[5049]: >
Jan 27 17:50:33 crc kubenswrapper[5049]: I0127 17:50:33.646486 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649"
Jan 27 17:50:33 crc kubenswrapper[5049]: E0127 17:50:33.647925 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:50:38 crc kubenswrapper[5049]: I0127 17:50:38.157815 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:38 crc kubenswrapper[5049]: I0127 17:50:38.226945 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:38 crc kubenswrapper[5049]: I0127 17:50:38.411843 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tpgvp"]
Jan 27 17:50:39 crc kubenswrapper[5049]: I0127 17:50:39.381266 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tpgvp" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="registry-server" containerID="cri-o://3960f1712616526d7f74805350ebbf23101ee3ab109d9eb27dec37568e56fa4f" gracePeriod=2
Jan 27 17:50:39 crc kubenswrapper[5049]: I0127 17:50:39.847886 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tpgvp"
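
[Editor's note] The startup-probe output above ("timeout: failed to connect service \":50051\" within 1s") is the message format of grpc_health_probe, which operator catalog pods commonly run as an exec probe against the registry-server's gRPC port; here the pod simply needed a few more seconds to load its catalog before reporting healthy (it goes "started"/"ready" at 17:50:38). A rough Go equivalent of that probe using the standard gRPC health protocol; the address and timeout are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Roughly what `grpc_health_probe -addr=:50051` does against a registry-server.
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
	if err != nil {
		fmt.Println(`timeout: failed to connect service ":50051" within 1s`)
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("health rpc failed:", err)
		return
	}
	fmt.Println("status:", resp.Status) // SERVING once the catalog has loaded
}
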
Jan 27 17:50:39 crc kubenswrapper[5049]: I0127 17:50:39.906550 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c08c39-aed8-4e57-a288-e37b7eed2793-catalog-content\") pod \"c0c08c39-aed8-4e57-a288-e37b7eed2793\" (UID: \"c0c08c39-aed8-4e57-a288-e37b7eed2793\") "
Jan 27 17:50:39 crc kubenswrapper[5049]: I0127 17:50:39.910058 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c08c39-aed8-4e57-a288-e37b7eed2793-utilities\") pod \"c0c08c39-aed8-4e57-a288-e37b7eed2793\" (UID: \"c0c08c39-aed8-4e57-a288-e37b7eed2793\") "
Jan 27 17:50:39 crc kubenswrapper[5049]: I0127 17:50:39.910102 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsv94\" (UniqueName: \"kubernetes.io/projected/c0c08c39-aed8-4e57-a288-e37b7eed2793-kube-api-access-dsv94\") pod \"c0c08c39-aed8-4e57-a288-e37b7eed2793\" (UID: \"c0c08c39-aed8-4e57-a288-e37b7eed2793\") "
Jan 27 17:50:39 crc kubenswrapper[5049]: I0127 17:50:39.911853 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c08c39-aed8-4e57-a288-e37b7eed2793-utilities" (OuterVolumeSpecName: "utilities") pod "c0c08c39-aed8-4e57-a288-e37b7eed2793" (UID: "c0c08c39-aed8-4e57-a288-e37b7eed2793"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:50:39 crc kubenswrapper[5049]: I0127 17:50:39.916515 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c08c39-aed8-4e57-a288-e37b7eed2793-kube-api-access-dsv94" (OuterVolumeSpecName: "kube-api-access-dsv94") pod "c0c08c39-aed8-4e57-a288-e37b7eed2793" (UID: "c0c08c39-aed8-4e57-a288-e37b7eed2793"). InnerVolumeSpecName "kube-api-access-dsv94". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.013602 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0c08c39-aed8-4e57-a288-e37b7eed2793-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.013635 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsv94\" (UniqueName: \"kubernetes.io/projected/c0c08c39-aed8-4e57-a288-e37b7eed2793-kube-api-access-dsv94\") on node \"crc\" DevicePath \"\""
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.033715 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c08c39-aed8-4e57-a288-e37b7eed2793-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0c08c39-aed8-4e57-a288-e37b7eed2793" (UID: "c0c08c39-aed8-4e57-a288-e37b7eed2793"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.114730 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0c08c39-aed8-4e57-a288-e37b7eed2793-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.402951 5049 generic.go:334] "Generic (PLEG): container finished" podID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerID="3960f1712616526d7f74805350ebbf23101ee3ab109d9eb27dec37568e56fa4f" exitCode=0
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.403064 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpgvp" event={"ID":"c0c08c39-aed8-4e57-a288-e37b7eed2793","Type":"ContainerDied","Data":"3960f1712616526d7f74805350ebbf23101ee3ab109d9eb27dec37568e56fa4f"}
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.403080 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tpgvp"
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.403110 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tpgvp" event={"ID":"c0c08c39-aed8-4e57-a288-e37b7eed2793","Type":"ContainerDied","Data":"bc748ceeaf068d8f8c219390006f62aeef7d1a49a7749f96aee6eb77a3505be8"}
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.403142 5049 scope.go:117] "RemoveContainer" containerID="3960f1712616526d7f74805350ebbf23101ee3ab109d9eb27dec37568e56fa4f"
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.437429 5049 scope.go:117] "RemoveContainer" containerID="0c30b3c25910dcff8415c13540c1372905b016837d09c461ffc90de8991cb060"
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.489970 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tpgvp"]
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.493638 5049 scope.go:117] "RemoveContainer" containerID="a175f2a165de7f2ec3a1bc33b462e68655fd523b69d58dd1a4a1c047e5441eb0"
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.497139 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tpgvp"]
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.524345 5049 scope.go:117] "RemoveContainer" containerID="3960f1712616526d7f74805350ebbf23101ee3ab109d9eb27dec37568e56fa4f"
Jan 27 17:50:40 crc kubenswrapper[5049]: E0127 17:50:40.524811 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3960f1712616526d7f74805350ebbf23101ee3ab109d9eb27dec37568e56fa4f\": container with ID starting with 3960f1712616526d7f74805350ebbf23101ee3ab109d9eb27dec37568e56fa4f not found: ID does not exist" containerID="3960f1712616526d7f74805350ebbf23101ee3ab109d9eb27dec37568e56fa4f"
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.524860 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3960f1712616526d7f74805350ebbf23101ee3ab109d9eb27dec37568e56fa4f"} err="failed to get container status \"3960f1712616526d7f74805350ebbf23101ee3ab109d9eb27dec37568e56fa4f\": rpc error: code = NotFound desc = could not find container \"3960f1712616526d7f74805350ebbf23101ee3ab109d9eb27dec37568e56fa4f\": container with ID starting with 3960f1712616526d7f74805350ebbf23101ee3ab109d9eb27dec37568e56fa4f not found: ID does not exist"
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.524893 5049 scope.go:117] "RemoveContainer" containerID="0c30b3c25910dcff8415c13540c1372905b016837d09c461ffc90de8991cb060"
Jan 27 17:50:40 crc kubenswrapper[5049]: E0127 17:50:40.525318 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c30b3c25910dcff8415c13540c1372905b016837d09c461ffc90de8991cb060\": container with ID starting with 0c30b3c25910dcff8415c13540c1372905b016837d09c461ffc90de8991cb060 not found: ID does not exist" containerID="0c30b3c25910dcff8415c13540c1372905b016837d09c461ffc90de8991cb060"
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.525346 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c30b3c25910dcff8415c13540c1372905b016837d09c461ffc90de8991cb060"} err="failed to get container status \"0c30b3c25910dcff8415c13540c1372905b016837d09c461ffc90de8991cb060\": rpc error: code = NotFound desc = could not find container \"0c30b3c25910dcff8415c13540c1372905b016837d09c461ffc90de8991cb060\": container with ID starting with 0c30b3c25910dcff8415c13540c1372905b016837d09c461ffc90de8991cb060 not found: ID does not exist"
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.525366 5049 scope.go:117] "RemoveContainer" containerID="a175f2a165de7f2ec3a1bc33b462e68655fd523b69d58dd1a4a1c047e5441eb0"
Jan 27 17:50:40 crc kubenswrapper[5049]: E0127 17:50:40.525977 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a175f2a165de7f2ec3a1bc33b462e68655fd523b69d58dd1a4a1c047e5441eb0\": container with ID starting with a175f2a165de7f2ec3a1bc33b462e68655fd523b69d58dd1a4a1c047e5441eb0 not found: ID does not exist" containerID="a175f2a165de7f2ec3a1bc33b462e68655fd523b69d58dd1a4a1c047e5441eb0"
Jan 27 17:50:40 crc kubenswrapper[5049]: I0127 17:50:40.526051 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a175f2a165de7f2ec3a1bc33b462e68655fd523b69d58dd1a4a1c047e5441eb0"} err="failed to get container status \"a175f2a165de7f2ec3a1bc33b462e68655fd523b69d58dd1a4a1c047e5441eb0\": rpc error: code = NotFound desc = could not find container \"a175f2a165de7f2ec3a1bc33b462e68655fd523b69d58dd1a4a1c047e5441eb0\": container with ID starting with a175f2a165de7f2ec3a1bc33b462e68655fd523b69d58dd1a4a1c047e5441eb0 not found: ID does not exist"
Jan 27 17:50:41 crc kubenswrapper[5049]: I0127 17:50:41.662928 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" path="/var/lib/kubelet/pods/c0c08c39-aed8-4e57-a288-e37b7eed2793/volumes"
Jan 27 17:50:47 crc kubenswrapper[5049]: I0127 17:50:47.646294 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649"
Jan 27 17:50:47 crc kubenswrapper[5049]: E0127 17:50:47.647147 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 17:51:01 crc kubenswrapper[5049]: I0127 17:51:01.646017 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649"
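
[Editor's note] The NotFound errors above are benign: by the time the kubelet asks the runtime for the status of each container it is deleting, its earlier removal already succeeded, so the lookup fails and the error is logged and then dropped. The usual idempotent-cleanup pattern looks roughly like this; the sketch is illustrative, not the kubelet's actual code:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent treats a gRPC NotFound as success: the container is already
// gone (here, removed by the kubelet itself moments earlier), so there is
// nothing left to do. Any other error is surfaced to the caller.
func removeIfPresent(remove func(id string) error, id string) error {
	if err := remove(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // already gone: nothing to do
		}
		return err
	}
	return nil
}

func main() {
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeIfPresent(gone, "3960f17126...")) // <nil>
}
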
Jan 27 17:51:01 crc kubenswrapper[5049]: E0127 17:51:01.647044 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:51:14 crc kubenswrapper[5049]: I0127 17:51:14.645884 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:51:14 crc kubenswrapper[5049]: E0127 17:51:14.646604 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:51:25 crc kubenswrapper[5049]: I0127 17:51:25.651507 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:51:25 crc kubenswrapper[5049]: E0127 17:51:25.652954 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:51:37 crc kubenswrapper[5049]: I0127 17:51:37.647044 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:51:37 crc kubenswrapper[5049]: E0127 17:51:37.648401 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:51:48 crc kubenswrapper[5049]: I0127 17:51:48.646929 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:51:48 crc kubenswrapper[5049]: E0127 17:51:48.649910 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:52:02 crc kubenswrapper[5049]: I0127 17:52:02.647111 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:52:02 crc kubenswrapper[5049]: E0127 17:52:02.648196 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:52:14 crc kubenswrapper[5049]: I0127 17:52:14.645709 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:52:14 crc kubenswrapper[5049]: E0127 17:52:14.646608 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:52:27 crc kubenswrapper[5049]: I0127 17:52:27.646141 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:52:27 crc kubenswrapper[5049]: E0127 17:52:27.646940 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:52:38 crc kubenswrapper[5049]: I0127 17:52:38.646325 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:52:38 crc kubenswrapper[5049]: E0127 17:52:38.647210 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:52:49 crc kubenswrapper[5049]: I0127 17:52:49.645662 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:52:49 crc kubenswrapper[5049]: E0127 17:52:49.647330 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:53:00 crc kubenswrapper[5049]: I0127 17:53:00.646147 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:53:00 crc kubenswrapper[5049]: E0127 17:53:00.647184 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:53:11 crc kubenswrapper[5049]: I0127 17:53:11.646526 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:53:11 crc kubenswrapper[5049]: E0127 17:53:11.647636 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:53:22 crc kubenswrapper[5049]: I0127 17:53:22.645990 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:53:22 crc kubenswrapper[5049]: E0127 17:53:22.646842 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:53:36 crc kubenswrapper[5049]: I0127 17:53:36.648567 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:53:36 crc kubenswrapper[5049]: E0127 17:53:36.649717 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:53:50 crc kubenswrapper[5049]: I0127 17:53:50.646455 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:53:50 crc kubenswrapper[5049]: E0127 17:53:50.647175 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:54:03 crc kubenswrapper[5049]: I0127 17:54:03.646011 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:54:03 crc kubenswrapper[5049]: E0127 17:54:03.647183 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:54:16 crc kubenswrapper[5049]: I0127 17:54:16.645718 5049 
scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:54:16 crc kubenswrapper[5049]: E0127 17:54:16.646538 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 17:54:27 crc kubenswrapper[5049]: I0127 17:54:27.646371 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 17:54:28 crc kubenswrapper[5049]: I0127 17:54:28.450378 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"da1017db3620f8ca96212deeac23951ff27fbdca907d2ca5fa295cd64575db3c"} Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.006017 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hm8jt"] Jan 27 17:55:16 crc kubenswrapper[5049]: E0127 17:55:16.006955 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="extract-content" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.006979 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="extract-content" Jan 27 17:55:16 crc kubenswrapper[5049]: E0127 17:55:16.007017 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="registry-server" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.007028 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="registry-server" Jan 27 17:55:16 crc kubenswrapper[5049]: E0127 17:55:16.007042 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="extract-utilities" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.007053 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="extract-utilities" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.007258 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="registry-server" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.009080 5049 util.go:30] "No sandbox for pod can be found. 
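The stretch from 17:50:47 to 17:54:16 above is one crash-loop window for machine-config-daemon-2d7n9: each sync retry, roughly every 10 to 14 seconds, logs the same "back-off 5m0s" refusal because the container's restart back-off has hit its cap, and the container only comes back once the window lapses (ContainerStarted da1017db… at 17:54:28, about five minutes after the run began). A sketch of that schedule follows, assuming kubelet's commonly documented 10 s doubling base; only the 5m0s cap is actually visible in this log, so treat the base as an assumption.

```go
// Sketch of kubelet-style crash-loop back-off: an initial delay that
// doubles on each failed restart, clamped at a maximum. The 10s base is
// the commonly cited default; the 5m cap matches the "back-off 5m0s"
// messages in the journal above.
package main

import (
	"fmt"
	"time"
)

func backoffSchedule(base, max time.Duration, restarts int) []time.Duration {
	out := make([]time.Duration, 0, restarts)
	d := base
	for i := 0; i < restarts; i++ {
		out = append(out, d)
		d *= 2
		if d > max {
			d = max // clamp: every later restart waits the full cap
		}
	}
	return out
}

func main() {
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s, 5m0s.
	for i, d := range backoffSchedule(10*time.Second, 5*time.Minute, 8) {
		fmt.Printf("restart %d: wait %v\n", i+1, d)
	}
}
```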
Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.006017 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hm8jt"] Jan 27 17:55:16 crc kubenswrapper[5049]: E0127 17:55:16.006955 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="extract-content" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.006979 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="extract-content" Jan 27 17:55:16 crc kubenswrapper[5049]: E0127 17:55:16.007017 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="registry-server" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.007028 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="registry-server" Jan 27 17:55:16 crc kubenswrapper[5049]: E0127 17:55:16.007042 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="extract-utilities" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.007053 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="extract-utilities" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.007258 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c08c39-aed8-4e57-a288-e37b7eed2793" containerName="registry-server" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.009080 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.011190 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hm8jt"] Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.075651 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca0ae7c-2c07-4eb1-8fcf-087183522139-catalog-content\") pod \"community-operators-hm8jt\" (UID: \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\") " pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.075720 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj8pm\" (UniqueName: \"kubernetes.io/projected/2ca0ae7c-2c07-4eb1-8fcf-087183522139-kube-api-access-jj8pm\") pod \"community-operators-hm8jt\" (UID: \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\") " pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.075801 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca0ae7c-2c07-4eb1-8fcf-087183522139-utilities\") pod \"community-operators-hm8jt\" (UID: \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\") " pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.177050 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca0ae7c-2c07-4eb1-8fcf-087183522139-catalog-content\") pod \"community-operators-hm8jt\" (UID: \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\") " pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.177117 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj8pm\" (UniqueName: \"kubernetes.io/projected/2ca0ae7c-2c07-4eb1-8fcf-087183522139-kube-api-access-jj8pm\") pod \"community-operators-hm8jt\" (UID: \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\") " pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.177176 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca0ae7c-2c07-4eb1-8fcf-087183522139-utilities\") pod \"community-operators-hm8jt\" (UID: \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\") " pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.177760 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca0ae7c-2c07-4eb1-8fcf-087183522139-utilities\") pod \"community-operators-hm8jt\" (UID: \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\") " pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.177850 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca0ae7c-2c07-4eb1-8fcf-087183522139-catalog-content\") pod \"community-operators-hm8jt\" (UID: \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\") " pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.199663 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jj8pm\" (UniqueName: \"kubernetes.io/projected/2ca0ae7c-2c07-4eb1-8fcf-087183522139-kube-api-access-jj8pm\") pod \"community-operators-hm8jt\" (UID: \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\") " pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.345875 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.588236 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hm8jt"] Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.882641 5049 generic.go:334] "Generic (PLEG): container finished" podID="2ca0ae7c-2c07-4eb1-8fcf-087183522139" containerID="5234cb4668f4ae12600462c77339fa4a14c587ef4f723c5d32a758ed2d21e052" exitCode=0 Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.882722 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm8jt" event={"ID":"2ca0ae7c-2c07-4eb1-8fcf-087183522139","Type":"ContainerDied","Data":"5234cb4668f4ae12600462c77339fa4a14c587ef4f723c5d32a758ed2d21e052"} Jan 27 17:55:16 crc kubenswrapper[5049]: I0127 17:55:16.882768 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm8jt" event={"ID":"2ca0ae7c-2c07-4eb1-8fcf-087183522139","Type":"ContainerStarted","Data":"0c6c95152c7c774a93ef727fd3b98ececf832bda81abe625500541ee1cb73eac"} Jan 27 17:55:18 crc kubenswrapper[5049]: I0127 17:55:18.907226 5049 generic.go:334] "Generic (PLEG): container finished" podID="2ca0ae7c-2c07-4eb1-8fcf-087183522139" containerID="022dcf48605bacd7d6b3ef43b48cdcedc0a7fa2e7339dc5907e5ce7ef0e47f06" exitCode=0 Jan 27 17:55:18 crc kubenswrapper[5049]: I0127 17:55:18.907323 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm8jt" event={"ID":"2ca0ae7c-2c07-4eb1-8fcf-087183522139","Type":"ContainerDied","Data":"022dcf48605bacd7d6b3ef43b48cdcedc0a7fa2e7339dc5907e5ce7ef0e47f06"} Jan 27 17:55:19 crc kubenswrapper[5049]: I0127 17:55:19.918335 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm8jt" event={"ID":"2ca0ae7c-2c07-4eb1-8fcf-087183522139","Type":"ContainerStarted","Data":"f61278089df555668be00a096fa3210686b71041948d99c08b800f6ea4c39644"} Jan 27 17:55:19 crc kubenswrapper[5049]: I0127 17:55:19.939322 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hm8jt" podStartSLOduration=2.23967821 podStartE2EDuration="4.939296672s" podCreationTimestamp="2026-01-27 17:55:15 +0000 UTC" firstStartedPulling="2026-01-27 17:55:16.883984481 +0000 UTC m=+3491.982958020" lastFinishedPulling="2026-01-27 17:55:19.583602923 +0000 UTC m=+3494.682576482" observedRunningTime="2026-01-27 17:55:19.932886501 +0000 UTC m=+3495.031860070" watchObservedRunningTime="2026-01-27 17:55:19.939296672 +0000 UTC m=+3495.038270241" Jan 27 17:55:26 crc kubenswrapper[5049]: I0127 17:55:26.347543 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:26 crc kubenswrapper[5049]: I0127 17:55:26.348446 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:26 crc kubenswrapper[5049]: I0127 17:55:26.428465 5049 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:27 crc kubenswrapper[5049]: I0127 17:55:27.027121 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:27 crc kubenswrapper[5049]: I0127 17:55:27.100594 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hm8jt"] Jan 27 17:55:28 crc kubenswrapper[5049]: I0127 17:55:28.996142 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hm8jt" podUID="2ca0ae7c-2c07-4eb1-8fcf-087183522139" containerName="registry-server" containerID="cri-o://f61278089df555668be00a096fa3210686b71041948d99c08b800f6ea4c39644" gracePeriod=2 Jan 27 17:55:31 crc kubenswrapper[5049]: I0127 17:55:31.012318 5049 generic.go:334] "Generic (PLEG): container finished" podID="2ca0ae7c-2c07-4eb1-8fcf-087183522139" containerID="f61278089df555668be00a096fa3210686b71041948d99c08b800f6ea4c39644" exitCode=0 Jan 27 17:55:31 crc kubenswrapper[5049]: I0127 17:55:31.012389 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm8jt" event={"ID":"2ca0ae7c-2c07-4eb1-8fcf-087183522139","Type":"ContainerDied","Data":"f61278089df555668be00a096fa3210686b71041948d99c08b800f6ea4c39644"} Jan 27 17:55:31 crc kubenswrapper[5049]: I0127 17:55:31.586978 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:31 crc kubenswrapper[5049]: I0127 17:55:31.751988 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj8pm\" (UniqueName: \"kubernetes.io/projected/2ca0ae7c-2c07-4eb1-8fcf-087183522139-kube-api-access-jj8pm\") pod \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\" (UID: \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\") " Jan 27 17:55:31 crc kubenswrapper[5049]: I0127 17:55:31.752038 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca0ae7c-2c07-4eb1-8fcf-087183522139-catalog-content\") pod \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\" (UID: \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\") " Jan 27 17:55:31 crc kubenswrapper[5049]: I0127 17:55:31.752087 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca0ae7c-2c07-4eb1-8fcf-087183522139-utilities\") pod \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\" (UID: \"2ca0ae7c-2c07-4eb1-8fcf-087183522139\") " Jan 27 17:55:31 crc kubenswrapper[5049]: I0127 17:55:31.753094 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca0ae7c-2c07-4eb1-8fcf-087183522139-utilities" (OuterVolumeSpecName: "utilities") pod "2ca0ae7c-2c07-4eb1-8fcf-087183522139" (UID: "2ca0ae7c-2c07-4eb1-8fcf-087183522139"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:55:31 crc kubenswrapper[5049]: I0127 17:55:31.760909 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca0ae7c-2c07-4eb1-8fcf-087183522139-kube-api-access-jj8pm" (OuterVolumeSpecName: "kube-api-access-jj8pm") pod "2ca0ae7c-2c07-4eb1-8fcf-087183522139" (UID: "2ca0ae7c-2c07-4eb1-8fcf-087183522139"). InnerVolumeSpecName "kube-api-access-jj8pm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:55:31 crc kubenswrapper[5049]: I0127 17:55:31.803613 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca0ae7c-2c07-4eb1-8fcf-087183522139-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ca0ae7c-2c07-4eb1-8fcf-087183522139" (UID: "2ca0ae7c-2c07-4eb1-8fcf-087183522139"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:55:31 crc kubenswrapper[5049]: I0127 17:55:31.853657 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca0ae7c-2c07-4eb1-8fcf-087183522139-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:55:31 crc kubenswrapper[5049]: I0127 17:55:31.853707 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj8pm\" (UniqueName: \"kubernetes.io/projected/2ca0ae7c-2c07-4eb1-8fcf-087183522139-kube-api-access-jj8pm\") on node \"crc\" DevicePath \"\"" Jan 27 17:55:31 crc kubenswrapper[5049]: I0127 17:55:31.853718 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca0ae7c-2c07-4eb1-8fcf-087183522139-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:55:32 crc kubenswrapper[5049]: I0127 17:55:32.024972 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm8jt" event={"ID":"2ca0ae7c-2c07-4eb1-8fcf-087183522139","Type":"ContainerDied","Data":"0c6c95152c7c774a93ef727fd3b98ececf832bda81abe625500541ee1cb73eac"} Jan 27 17:55:32 crc kubenswrapper[5049]: I0127 17:55:32.025314 5049 scope.go:117] "RemoveContainer" containerID="f61278089df555668be00a096fa3210686b71041948d99c08b800f6ea4c39644" Jan 27 17:55:32 crc kubenswrapper[5049]: I0127 17:55:32.025064 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hm8jt" Jan 27 17:55:32 crc kubenswrapper[5049]: I0127 17:55:32.069781 5049 scope.go:117] "RemoveContainer" containerID="022dcf48605bacd7d6b3ef43b48cdcedc0a7fa2e7339dc5907e5ce7ef0e47f06" Jan 27 17:55:32 crc kubenswrapper[5049]: I0127 17:55:32.093848 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hm8jt"] Jan 27 17:55:32 crc kubenswrapper[5049]: I0127 17:55:32.102721 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hm8jt"] Jan 27 17:55:32 crc kubenswrapper[5049]: I0127 17:55:32.111193 5049 scope.go:117] "RemoveContainer" containerID="5234cb4668f4ae12600462c77339fa4a14c587ef4f723c5d32a758ed2d21e052" Jan 27 17:55:33 crc kubenswrapper[5049]: I0127 17:55:33.654826 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca0ae7c-2c07-4eb1-8fcf-087183522139" path="/var/lib/kubelet/pods/2ca0ae7c-2c07-4eb1-8fcf-087183522139/volumes"
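Between "SyncLoop ADD" at 17:55:16 and "Cleaned up orphaned pod volumes dir" at 17:55:33 the journal captures one complete lifecycle of a short-lived marketplace catalog pod: volumes are verified and mounted, two setup containers finish with exitCode=0, registry-server starts and turns ready, the API object is deleted, the container is killed with gracePeriod=2, volumes are unmounted and detached, and the orphaned volume directory is reaped. The same ADD/UPDATE/DELETE churn the kubelet reacts to can be watched from the API side; a client-go sketch follows, assuming a reachable kubeconfig at the default location and with error handling kept minimal.

```go
// Sketch: observing the pod churn in openshift-marketplace from the API
// side with client-go. Each watch event corresponds to a
// "SyncLoop ADD/UPDATE/DELETE" journal line for pods on the node.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; inside a pod you would use rest.InClusterConfig().
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	w, err := cs.CoreV1().Pods("openshift-marketplace").Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		if pod, ok := ev.Object.(*corev1.Pod); ok {
			fmt.Println(ev.Type, pod.Namespace+"/"+pod.Name, pod.Status.Phase)
		}
	}
}
```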
Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.409466 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vkcj2"] Jan 27 17:55:48 crc kubenswrapper[5049]: E0127 17:55:48.410590 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca0ae7c-2c07-4eb1-8fcf-087183522139" containerName="extract-content" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.410611 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca0ae7c-2c07-4eb1-8fcf-087183522139" containerName="extract-content" Jan 27 17:55:48 crc kubenswrapper[5049]: E0127 17:55:48.410630 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca0ae7c-2c07-4eb1-8fcf-087183522139" containerName="registry-server" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.410640 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca0ae7c-2c07-4eb1-8fcf-087183522139" containerName="registry-server" Jan 27 17:55:48 crc kubenswrapper[5049]: E0127 17:55:48.410694 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca0ae7c-2c07-4eb1-8fcf-087183522139" containerName="extract-utilities" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.410708 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca0ae7c-2c07-4eb1-8fcf-087183522139" containerName="extract-utilities" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.410947 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca0ae7c-2c07-4eb1-8fcf-087183522139" containerName="registry-server" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.412455 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.425341 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkcj2"] Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.527661 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdkhb\" (UniqueName: \"kubernetes.io/projected/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-kube-api-access-jdkhb\") pod \"redhat-marketplace-vkcj2\" (UID: \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\") " pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.527819 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-catalog-content\") pod \"redhat-marketplace-vkcj2\" (UID: \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\") " pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.527846 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-utilities\") pod \"redhat-marketplace-vkcj2\" (UID: \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\") " pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.629293 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-catalog-content\") pod \"redhat-marketplace-vkcj2\" (UID: \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\") " pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.629361 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-catalog-content\") pod \"redhat-marketplace-vkcj2\" (UID: \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\") " pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.629424 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-utilities\") pod \"redhat-marketplace-vkcj2\" (UID: \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\") " pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.629795 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-utilities\") pod \"redhat-marketplace-vkcj2\" (UID: \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\") " pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.629930 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdkhb\" (UniqueName: \"kubernetes.io/projected/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-kube-api-access-jdkhb\") pod \"redhat-marketplace-vkcj2\" (UID: \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\") " pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.650380 5049 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jdkhb\" (UniqueName: \"kubernetes.io/projected/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-kube-api-access-jdkhb\") pod \"redhat-marketplace-vkcj2\" (UID: \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\") " pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.742068 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:48 crc kubenswrapper[5049]: I0127 17:55:48.984826 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkcj2"] Jan 27 17:55:49 crc kubenswrapper[5049]: I0127 17:55:49.193945 5049 generic.go:334] "Generic (PLEG): container finished" podID="bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" containerID="fca161d5082d24e39391902c7004b94d13a3f035e393c8e7f675c39c3248fce7" exitCode=0 Jan 27 17:55:49 crc kubenswrapper[5049]: I0127 17:55:49.194008 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkcj2" event={"ID":"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b","Type":"ContainerDied","Data":"fca161d5082d24e39391902c7004b94d13a3f035e393c8e7f675c39c3248fce7"} Jan 27 17:55:49 crc kubenswrapper[5049]: I0127 17:55:49.194280 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkcj2" event={"ID":"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b","Type":"ContainerStarted","Data":"a02347eb1e30aab621afdc7e2342c28948703cacf94162642ec6b5ad966acc55"} Jan 27 17:55:49 crc kubenswrapper[5049]: I0127 17:55:49.197651 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 17:55:51 crc kubenswrapper[5049]: I0127 17:55:51.209845 5049 generic.go:334] "Generic (PLEG): container finished" podID="bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" containerID="d587a5116dc881d6f3e1c4a02fb0f0844f3aacb16f8dbdb46487c342175a54ff" exitCode=0 Jan 27 17:55:51 crc kubenswrapper[5049]: I0127 17:55:51.209889 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkcj2" event={"ID":"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b","Type":"ContainerDied","Data":"d587a5116dc881d6f3e1c4a02fb0f0844f3aacb16f8dbdb46487c342175a54ff"} Jan 27 17:55:52 crc kubenswrapper[5049]: I0127 17:55:52.222992 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkcj2" event={"ID":"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b","Type":"ContainerStarted","Data":"7397c89db9c5eb0b175e1dfa01fc9c842144c2dcc6c422e8d96c91ff8752993f"} Jan 27 17:55:52 crc kubenswrapper[5049]: I0127 17:55:52.255766 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vkcj2" podStartSLOduration=1.5475196759999998 podStartE2EDuration="4.255739702s" podCreationTimestamp="2026-01-27 17:55:48 +0000 UTC" firstStartedPulling="2026-01-27 17:55:49.197317693 +0000 UTC m=+3524.296291252" lastFinishedPulling="2026-01-27 17:55:51.905537689 +0000 UTC m=+3527.004511278" observedRunningTime="2026-01-27 17:55:52.25035028 +0000 UTC m=+3527.349323829" watchObservedRunningTime="2026-01-27 17:55:52.255739702 +0000 UTC m=+3527.354713291" Jan 27 17:55:58 crc kubenswrapper[5049]: I0127 17:55:58.743202 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:58 crc kubenswrapper[5049]: I0127 17:55:58.743911 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:58 crc kubenswrapper[5049]: I0127 17:55:58.804764 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:59 crc kubenswrapper[5049]: I0127 17:55:59.392870 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:55:59 crc kubenswrapper[5049]: I0127 17:55:59.465649 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkcj2"] Jan 27 17:56:01 crc kubenswrapper[5049]: I0127 17:56:01.320348 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vkcj2" podUID="bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" containerName="registry-server" containerID="cri-o://7397c89db9c5eb0b175e1dfa01fc9c842144c2dcc6c422e8d96c91ff8752993f" gracePeriod=2 Jan 27 17:56:01 crc kubenswrapper[5049]: I0127 17:56:01.799029 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:56:01 crc kubenswrapper[5049]: I0127 17:56:01.962896 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-utilities\") pod \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\" (UID: \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\") " Jan 27 17:56:01 crc kubenswrapper[5049]: I0127 17:56:01.962953 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-catalog-content\") pod \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\" (UID: \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\") " Jan 27 17:56:01 crc kubenswrapper[5049]: I0127 17:56:01.963005 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdkhb\" (UniqueName: \"kubernetes.io/projected/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-kube-api-access-jdkhb\") pod \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\" (UID: \"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b\") " Jan 27 17:56:01 crc kubenswrapper[5049]: I0127 17:56:01.964713 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-utilities" (OuterVolumeSpecName: "utilities") pod "bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" (UID: "bd15580c-c0ac-4121-b62b-1ef9a0f8e29b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:56:01 crc kubenswrapper[5049]: I0127 17:56:01.968924 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-kube-api-access-jdkhb" (OuterVolumeSpecName: "kube-api-access-jdkhb") pod "bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" (UID: "bd15580c-c0ac-4121-b62b-1ef9a0f8e29b"). InnerVolumeSpecName "kube-api-access-jdkhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:56:01 crc kubenswrapper[5049]: I0127 17:56:01.994236 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" (UID: "bd15580c-c0ac-4121-b62b-1ef9a0f8e29b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.065114 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.065370 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.065455 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdkhb\" (UniqueName: \"kubernetes.io/projected/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b-kube-api-access-jdkhb\") on node \"crc\" DevicePath \"\"" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.329420 5049 generic.go:334] "Generic (PLEG): container finished" podID="bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" containerID="7397c89db9c5eb0b175e1dfa01fc9c842144c2dcc6c422e8d96c91ff8752993f" exitCode=0 Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.329523 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkcj2" event={"ID":"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b","Type":"ContainerDied","Data":"7397c89db9c5eb0b175e1dfa01fc9c842144c2dcc6c422e8d96c91ff8752993f"} Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.329918 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vkcj2" event={"ID":"bd15580c-c0ac-4121-b62b-1ef9a0f8e29b","Type":"ContainerDied","Data":"a02347eb1e30aab621afdc7e2342c28948703cacf94162642ec6b5ad966acc55"} Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.329564 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vkcj2" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.330010 5049 scope.go:117] "RemoveContainer" containerID="7397c89db9c5eb0b175e1dfa01fc9c842144c2dcc6c422e8d96c91ff8752993f" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.364635 5049 scope.go:117] "RemoveContainer" containerID="d587a5116dc881d6f3e1c4a02fb0f0844f3aacb16f8dbdb46487c342175a54ff" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.368802 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkcj2"] Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.382544 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vkcj2"] Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.399290 5049 scope.go:117] "RemoveContainer" containerID="fca161d5082d24e39391902c7004b94d13a3f035e393c8e7f675c39c3248fce7" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.427891 5049 scope.go:117] "RemoveContainer" containerID="7397c89db9c5eb0b175e1dfa01fc9c842144c2dcc6c422e8d96c91ff8752993f" Jan 27 17:56:02 crc kubenswrapper[5049]: E0127 17:56:02.428517 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7397c89db9c5eb0b175e1dfa01fc9c842144c2dcc6c422e8d96c91ff8752993f\": container with ID starting with 7397c89db9c5eb0b175e1dfa01fc9c842144c2dcc6c422e8d96c91ff8752993f not found: ID does not exist" containerID="7397c89db9c5eb0b175e1dfa01fc9c842144c2dcc6c422e8d96c91ff8752993f" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.428580 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7397c89db9c5eb0b175e1dfa01fc9c842144c2dcc6c422e8d96c91ff8752993f"} err="failed to get container status \"7397c89db9c5eb0b175e1dfa01fc9c842144c2dcc6c422e8d96c91ff8752993f\": rpc error: code = NotFound desc = could not find container \"7397c89db9c5eb0b175e1dfa01fc9c842144c2dcc6c422e8d96c91ff8752993f\": container with ID starting with 7397c89db9c5eb0b175e1dfa01fc9c842144c2dcc6c422e8d96c91ff8752993f not found: ID does not exist" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.428621 5049 scope.go:117] "RemoveContainer" containerID="d587a5116dc881d6f3e1c4a02fb0f0844f3aacb16f8dbdb46487c342175a54ff" Jan 27 17:56:02 crc kubenswrapper[5049]: E0127 17:56:02.429132 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d587a5116dc881d6f3e1c4a02fb0f0844f3aacb16f8dbdb46487c342175a54ff\": container with ID starting with d587a5116dc881d6f3e1c4a02fb0f0844f3aacb16f8dbdb46487c342175a54ff not found: ID does not exist" containerID="d587a5116dc881d6f3e1c4a02fb0f0844f3aacb16f8dbdb46487c342175a54ff" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.429174 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d587a5116dc881d6f3e1c4a02fb0f0844f3aacb16f8dbdb46487c342175a54ff"} err="failed to get container status \"d587a5116dc881d6f3e1c4a02fb0f0844f3aacb16f8dbdb46487c342175a54ff\": rpc error: code = NotFound desc = could not find container \"d587a5116dc881d6f3e1c4a02fb0f0844f3aacb16f8dbdb46487c342175a54ff\": container with ID starting with d587a5116dc881d6f3e1c4a02fb0f0844f3aacb16f8dbdb46487c342175a54ff not found: ID does not exist" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.429200 5049 scope.go:117] "RemoveContainer" 
containerID="fca161d5082d24e39391902c7004b94d13a3f035e393c8e7f675c39c3248fce7" Jan 27 17:56:02 crc kubenswrapper[5049]: E0127 17:56:02.429494 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fca161d5082d24e39391902c7004b94d13a3f035e393c8e7f675c39c3248fce7\": container with ID starting with fca161d5082d24e39391902c7004b94d13a3f035e393c8e7f675c39c3248fce7 not found: ID does not exist" containerID="fca161d5082d24e39391902c7004b94d13a3f035e393c8e7f675c39c3248fce7" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.429537 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fca161d5082d24e39391902c7004b94d13a3f035e393c8e7f675c39c3248fce7"} err="failed to get container status \"fca161d5082d24e39391902c7004b94d13a3f035e393c8e7f675c39c3248fce7\": rpc error: code = NotFound desc = could not find container \"fca161d5082d24e39391902c7004b94d13a3f035e393c8e7f675c39c3248fce7\": container with ID starting with fca161d5082d24e39391902c7004b94d13a3f035e393c8e7f675c39c3248fce7 not found: ID does not exist" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.460973 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qlcq4"] Jan 27 17:56:02 crc kubenswrapper[5049]: E0127 17:56:02.461417 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" containerName="extract-utilities" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.461436 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" containerName="extract-utilities" Jan 27 17:56:02 crc kubenswrapper[5049]: E0127 17:56:02.461453 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" containerName="registry-server" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.461461 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" containerName="registry-server" Jan 27 17:56:02 crc kubenswrapper[5049]: E0127 17:56:02.461476 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" containerName="extract-content" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.461484 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" containerName="extract-content" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.461744 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" containerName="registry-server" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.463058 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.472977 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qlcq4"] Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.574243 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-catalog-content\") pod \"certified-operators-qlcq4\" (UID: \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\") " pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.574384 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s244g\" (UniqueName: \"kubernetes.io/projected/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-kube-api-access-s244g\") pod \"certified-operators-qlcq4\" (UID: \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\") " pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.574420 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-utilities\") pod \"certified-operators-qlcq4\" (UID: \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\") " pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.675784 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-catalog-content\") pod \"certified-operators-qlcq4\" (UID: \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\") " pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.675867 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s244g\" (UniqueName: \"kubernetes.io/projected/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-kube-api-access-s244g\") pod \"certified-operators-qlcq4\" (UID: \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\") " pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.675885 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-utilities\") pod \"certified-operators-qlcq4\" (UID: \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\") " pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.676361 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-catalog-content\") pod \"certified-operators-qlcq4\" (UID: \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\") " pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.676385 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-utilities\") pod \"certified-operators-qlcq4\" (UID: \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\") " pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.695367 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s244g\" (UniqueName: \"kubernetes.io/projected/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-kube-api-access-s244g\") pod \"certified-operators-qlcq4\" (UID: \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\") " pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:02 crc kubenswrapper[5049]: I0127 17:56:02.791736 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:03 crc kubenswrapper[5049]: I0127 17:56:03.095230 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qlcq4"] Jan 27 17:56:03 crc kubenswrapper[5049]: I0127 17:56:03.340547 5049 generic.go:334] "Generic (PLEG): container finished" podID="5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" containerID="bbbfb892957b2b9857c36990e63b9b878feabcd6147e7f007cd31e1dec08a7dd" exitCode=0 Jan 27 17:56:03 crc kubenswrapper[5049]: I0127 17:56:03.340589 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qlcq4" event={"ID":"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f","Type":"ContainerDied","Data":"bbbfb892957b2b9857c36990e63b9b878feabcd6147e7f007cd31e1dec08a7dd"} Jan 27 17:56:03 crc kubenswrapper[5049]: I0127 17:56:03.340620 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qlcq4" event={"ID":"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f","Type":"ContainerStarted","Data":"c1178249a53e27b2162fa71b30355c6db9467a1740937824e5c73b572f15187a"} Jan 27 17:56:03 crc kubenswrapper[5049]: I0127 17:56:03.668075 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd15580c-c0ac-4121-b62b-1ef9a0f8e29b" path="/var/lib/kubelet/pods/bd15580c-c0ac-4121-b62b-1ef9a0f8e29b/volumes" Jan 27 17:56:05 crc kubenswrapper[5049]: I0127 17:56:05.366236 5049 generic.go:334] "Generic (PLEG): container finished" podID="5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" containerID="8dc45e488c7c64c1459833ea9c4b8c80f6367d11ef54980fcaedd6c4c0950a15" exitCode=0 Jan 27 17:56:05 crc kubenswrapper[5049]: I0127 17:56:05.366606 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qlcq4" event={"ID":"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f","Type":"ContainerDied","Data":"8dc45e488c7c64c1459833ea9c4b8c80f6367d11ef54980fcaedd6c4c0950a15"} Jan 27 17:56:07 crc kubenswrapper[5049]: I0127 17:56:07.384984 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qlcq4" event={"ID":"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f","Type":"ContainerStarted","Data":"3f754f1f86e03efb9708615e22444817a9165eb67c99132b3d2f3ceb590a8592"} Jan 27 17:56:07 crc kubenswrapper[5049]: I0127 17:56:07.419767 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qlcq4" podStartSLOduration=2.849035592 podStartE2EDuration="5.419733618s" podCreationTimestamp="2026-01-27 17:56:02 +0000 UTC" firstStartedPulling="2026-01-27 17:56:03.342347022 +0000 UTC m=+3538.441320571" lastFinishedPulling="2026-01-27 17:56:05.913045008 +0000 UTC m=+3541.012018597" observedRunningTime="2026-01-27 17:56:07.416043813 +0000 UTC m=+3542.515017422" watchObservedRunningTime="2026-01-27 17:56:07.419733618 +0000 UTC m=+3542.518707217" Jan 27 17:56:12 crc kubenswrapper[5049]: I0127 17:56:12.792633 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 
Jan 27 17:56:12 crc kubenswrapper[5049]: I0127 17:56:12.793030 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:12 crc kubenswrapper[5049]: I0127 17:56:12.841411 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:13 crc kubenswrapper[5049]: I0127 17:56:13.507083 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:13 crc kubenswrapper[5049]: I0127 17:56:13.574155 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qlcq4"] Jan 27 17:56:15 crc kubenswrapper[5049]: I0127 17:56:15.472766 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qlcq4" podUID="5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" containerName="registry-server" containerID="cri-o://3f754f1f86e03efb9708615e22444817a9165eb67c99132b3d2f3ceb590a8592" gracePeriod=2 Jan 27 17:56:15 crc kubenswrapper[5049]: I0127 17:56:15.907318 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:15 crc kubenswrapper[5049]: I0127 17:56:15.978546 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-utilities\") pod \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\" (UID: \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\") " Jan 27 17:56:15 crc kubenswrapper[5049]: I0127 17:56:15.978714 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-catalog-content\") pod \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\" (UID: \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\") " Jan 27 17:56:15 crc kubenswrapper[5049]: I0127 17:56:15.978760 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s244g\" (UniqueName: \"kubernetes.io/projected/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-kube-api-access-s244g\") pod \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\" (UID: \"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f\") " Jan 27 17:56:15 crc kubenswrapper[5049]: I0127 17:56:15.980124 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-utilities" (OuterVolumeSpecName: "utilities") pod "5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" (UID: "5d6ab6fd-8a03-473e-82b2-e225ddb2df5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:56:15 crc kubenswrapper[5049]: I0127 17:56:15.984155 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-kube-api-access-s244g" (OuterVolumeSpecName: "kube-api-access-s244g") pod "5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" (UID: "5d6ab6fd-8a03-473e-82b2-e225ddb2df5f"). InnerVolumeSpecName "kube-api-access-s244g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.046221 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" (UID: "5d6ab6fd-8a03-473e-82b2-e225ddb2df5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.081174 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.081215 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.081232 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s244g\" (UniqueName: \"kubernetes.io/projected/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f-kube-api-access-s244g\") on node \"crc\" DevicePath \"\"" Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.487103 5049 generic.go:334] "Generic (PLEG): container finished" podID="5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" containerID="3f754f1f86e03efb9708615e22444817a9165eb67c99132b3d2f3ceb590a8592" exitCode=0 Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.487197 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qlcq4" event={"ID":"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f","Type":"ContainerDied","Data":"3f754f1f86e03efb9708615e22444817a9165eb67c99132b3d2f3ceb590a8592"} Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.487325 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qlcq4" event={"ID":"5d6ab6fd-8a03-473e-82b2-e225ddb2df5f","Type":"ContainerDied","Data":"c1178249a53e27b2162fa71b30355c6db9467a1740937824e5c73b572f15187a"} Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.487372 5049 scope.go:117] "RemoveContainer" containerID="3f754f1f86e03efb9708615e22444817a9165eb67c99132b3d2f3ceb590a8592" Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.487248 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qlcq4" Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.523242 5049 scope.go:117] "RemoveContainer" containerID="8dc45e488c7c64c1459833ea9c4b8c80f6367d11ef54980fcaedd6c4c0950a15" Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.569700 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qlcq4"] Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.581007 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qlcq4"] Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.589131 5049 scope.go:117] "RemoveContainer" containerID="bbbfb892957b2b9857c36990e63b9b878feabcd6147e7f007cd31e1dec08a7dd" Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.607493 5049 scope.go:117] "RemoveContainer" containerID="3f754f1f86e03efb9708615e22444817a9165eb67c99132b3d2f3ceb590a8592" Jan 27 17:56:16 crc kubenswrapper[5049]: E0127 17:56:16.607932 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f754f1f86e03efb9708615e22444817a9165eb67c99132b3d2f3ceb590a8592\": container with ID starting with 3f754f1f86e03efb9708615e22444817a9165eb67c99132b3d2f3ceb590a8592 not found: ID does not exist" containerID="3f754f1f86e03efb9708615e22444817a9165eb67c99132b3d2f3ceb590a8592" Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.607979 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f754f1f86e03efb9708615e22444817a9165eb67c99132b3d2f3ceb590a8592"} err="failed to get container status \"3f754f1f86e03efb9708615e22444817a9165eb67c99132b3d2f3ceb590a8592\": rpc error: code = NotFound desc = could not find container \"3f754f1f86e03efb9708615e22444817a9165eb67c99132b3d2f3ceb590a8592\": container with ID starting with 3f754f1f86e03efb9708615e22444817a9165eb67c99132b3d2f3ceb590a8592 not found: ID does not exist" Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.608008 5049 scope.go:117] "RemoveContainer" containerID="8dc45e488c7c64c1459833ea9c4b8c80f6367d11ef54980fcaedd6c4c0950a15" Jan 27 17:56:16 crc kubenswrapper[5049]: E0127 17:56:16.608393 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dc45e488c7c64c1459833ea9c4b8c80f6367d11ef54980fcaedd6c4c0950a15\": container with ID starting with 8dc45e488c7c64c1459833ea9c4b8c80f6367d11ef54980fcaedd6c4c0950a15 not found: ID does not exist" containerID="8dc45e488c7c64c1459833ea9c4b8c80f6367d11ef54980fcaedd6c4c0950a15" Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.608452 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dc45e488c7c64c1459833ea9c4b8c80f6367d11ef54980fcaedd6c4c0950a15"} err="failed to get container status \"8dc45e488c7c64c1459833ea9c4b8c80f6367d11ef54980fcaedd6c4c0950a15\": rpc error: code = NotFound desc = could not find container \"8dc45e488c7c64c1459833ea9c4b8c80f6367d11ef54980fcaedd6c4c0950a15\": container with ID starting with 8dc45e488c7c64c1459833ea9c4b8c80f6367d11ef54980fcaedd6c4c0950a15 not found: ID does not exist" Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.608488 5049 scope.go:117] "RemoveContainer" containerID="bbbfb892957b2b9857c36990e63b9b878feabcd6147e7f007cd31e1dec08a7dd" Jan 27 17:56:16 crc kubenswrapper[5049]: E0127 17:56:16.608973 5049 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bbbfb892957b2b9857c36990e63b9b878feabcd6147e7f007cd31e1dec08a7dd\": container with ID starting with bbbfb892957b2b9857c36990e63b9b878feabcd6147e7f007cd31e1dec08a7dd not found: ID does not exist" containerID="bbbfb892957b2b9857c36990e63b9b878feabcd6147e7f007cd31e1dec08a7dd" Jan 27 17:56:16 crc kubenswrapper[5049]: I0127 17:56:16.609012 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbbfb892957b2b9857c36990e63b9b878feabcd6147e7f007cd31e1dec08a7dd"} err="failed to get container status \"bbbfb892957b2b9857c36990e63b9b878feabcd6147e7f007cd31e1dec08a7dd\": rpc error: code = NotFound desc = could not find container \"bbbfb892957b2b9857c36990e63b9b878feabcd6147e7f007cd31e1dec08a7dd\": container with ID starting with bbbfb892957b2b9857c36990e63b9b878feabcd6147e7f007cd31e1dec08a7dd not found: ID does not exist" Jan 27 17:56:17 crc kubenswrapper[5049]: I0127 17:56:17.664929 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" path="/var/lib/kubelet/pods/5d6ab6fd-8a03-473e-82b2-e225ddb2df5f/volumes" Jan 27 17:56:47 crc kubenswrapper[5049]: I0127 17:56:47.781583 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:56:47 crc kubenswrapper[5049]: I0127 17:56:47.782243 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:57:17 crc kubenswrapper[5049]: I0127 17:57:17.782266 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:57:17 crc kubenswrapper[5049]: I0127 17:57:17.782760 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:57:47 crc kubenswrapper[5049]: I0127 17:57:47.781654 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 17:57:47 crc kubenswrapper[5049]: I0127 17:57:47.782336 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 17:57:47 crc kubenswrapper[5049]: I0127 17:57:47.782403 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 17:57:47 crc kubenswrapper[5049]: I0127 17:57:47.783362 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da1017db3620f8ca96212deeac23951ff27fbdca907d2ca5fa295cd64575db3c"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 17:57:47 crc kubenswrapper[5049]: I0127 17:57:47.783438 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://da1017db3620f8ca96212deeac23951ff27fbdca907d2ca5fa295cd64575db3c" gracePeriod=600 Jan 27 17:57:48 crc kubenswrapper[5049]: I0127 17:57:48.268368 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="da1017db3620f8ca96212deeac23951ff27fbdca907d2ca5fa295cd64575db3c" exitCode=0 Jan 27 17:57:48 crc kubenswrapper[5049]: I0127 17:57:48.268767 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"da1017db3620f8ca96212deeac23951ff27fbdca907d2ca5fa295cd64575db3c"} Jan 27 17:57:48 crc kubenswrapper[5049]: I0127 17:57:48.268799 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7"} Jan 27 17:57:48 crc kubenswrapper[5049]: I0127 17:57:48.268818 5049 scope.go:117] "RemoveContainer" containerID="e4c80b8013a4e59ff9c90fd92828bb5827fc8a9beb10eed190efe4e0af9ce649" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.167478 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq"] Jan 27 18:00:00 crc kubenswrapper[5049]: E0127 18:00:00.168303 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" containerName="registry-server" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.168317 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" containerName="registry-server" Jan 27 18:00:00 crc kubenswrapper[5049]: E0127 18:00:00.168329 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" containerName="extract-utilities" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.168335 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" containerName="extract-utilities" Jan 27 18:00:00 crc kubenswrapper[5049]: E0127 18:00:00.168356 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" containerName="extract-content" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.168362 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" containerName="extract-content" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.168476 5049 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5d6ab6fd-8a03-473e-82b2-e225ddb2df5f" containerName="registry-server" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.168914 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.170440 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.182771 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq"] Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.188275 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.335664 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps9fg\" (UniqueName: \"kubernetes.io/projected/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-kube-api-access-ps9fg\") pod \"collect-profiles-29492280-ccpmq\" (UID: \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.335728 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-secret-volume\") pod \"collect-profiles-29492280-ccpmq\" (UID: \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.335776 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-config-volume\") pod \"collect-profiles-29492280-ccpmq\" (UID: \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.437186 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps9fg\" (UniqueName: \"kubernetes.io/projected/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-kube-api-access-ps9fg\") pod \"collect-profiles-29492280-ccpmq\" (UID: \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.437254 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-secret-volume\") pod \"collect-profiles-29492280-ccpmq\" (UID: \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.437356 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-config-volume\") pod \"collect-profiles-29492280-ccpmq\" (UID: \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.438781 
5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-config-volume\") pod \"collect-profiles-29492280-ccpmq\" (UID: \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.442995 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-secret-volume\") pod \"collect-profiles-29492280-ccpmq\" (UID: \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.463091 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps9fg\" (UniqueName: \"kubernetes.io/projected/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-kube-api-access-ps9fg\") pod \"collect-profiles-29492280-ccpmq\" (UID: \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.526081 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" Jan 27 18:00:00 crc kubenswrapper[5049]: I0127 18:00:00.934596 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq"] Jan 27 18:00:01 crc kubenswrapper[5049]: I0127 18:00:01.327560 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" event={"ID":"7a4812a4-0a7d-494d-ad4f-c8350c518fbd","Type":"ContainerStarted","Data":"646eb0a2de210109356a19fdb5da489f62c378817e42b2e9db68dd5b2c3d026d"} Jan 27 18:00:01 crc kubenswrapper[5049]: I0127 18:00:01.327892 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" event={"ID":"7a4812a4-0a7d-494d-ad4f-c8350c518fbd","Type":"ContainerStarted","Data":"f78de52ffd6e01e84802bb68bb7e7ff650a0eb7c50aeb949cf34d6f2f3fa6d34"} Jan 27 18:00:01 crc kubenswrapper[5049]: I0127 18:00:01.369487 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" podStartSLOduration=1.3694698 podStartE2EDuration="1.3694698s" podCreationTimestamp="2026-01-27 18:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:00:01.364144179 +0000 UTC m=+3776.463117738" watchObservedRunningTime="2026-01-27 18:00:01.3694698 +0000 UTC m=+3776.468443349" Jan 27 18:00:02 crc kubenswrapper[5049]: I0127 18:00:02.334984 5049 generic.go:334] "Generic (PLEG): container finished" podID="7a4812a4-0a7d-494d-ad4f-c8350c518fbd" containerID="646eb0a2de210109356a19fdb5da489f62c378817e42b2e9db68dd5b2c3d026d" exitCode=0 Jan 27 18:00:02 crc kubenswrapper[5049]: I0127 18:00:02.335036 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" event={"ID":"7a4812a4-0a7d-494d-ad4f-c8350c518fbd","Type":"ContainerDied","Data":"646eb0a2de210109356a19fdb5da489f62c378817e42b2e9db68dd5b2c3d026d"} Jan 27 18:00:03 crc kubenswrapper[5049]: I0127 18:00:03.652213 5049 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" Jan 27 18:00:03 crc kubenswrapper[5049]: I0127 18:00:03.699382 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps9fg\" (UniqueName: \"kubernetes.io/projected/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-kube-api-access-ps9fg\") pod \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\" (UID: \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\") " Jan 27 18:00:03 crc kubenswrapper[5049]: I0127 18:00:03.699521 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-secret-volume\") pod \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\" (UID: \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\") " Jan 27 18:00:03 crc kubenswrapper[5049]: I0127 18:00:03.699660 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-config-volume\") pod \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\" (UID: \"7a4812a4-0a7d-494d-ad4f-c8350c518fbd\") " Jan 27 18:00:03 crc kubenswrapper[5049]: I0127 18:00:03.701794 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-config-volume" (OuterVolumeSpecName: "config-volume") pod "7a4812a4-0a7d-494d-ad4f-c8350c518fbd" (UID: "7a4812a4-0a7d-494d-ad4f-c8350c518fbd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:00:03 crc kubenswrapper[5049]: I0127 18:00:03.705447 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-kube-api-access-ps9fg" (OuterVolumeSpecName: "kube-api-access-ps9fg") pod "7a4812a4-0a7d-494d-ad4f-c8350c518fbd" (UID: "7a4812a4-0a7d-494d-ad4f-c8350c518fbd"). InnerVolumeSpecName "kube-api-access-ps9fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:00:03 crc kubenswrapper[5049]: I0127 18:00:03.706209 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7a4812a4-0a7d-494d-ad4f-c8350c518fbd" (UID: "7a4812a4-0a7d-494d-ad4f-c8350c518fbd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:00:03 crc kubenswrapper[5049]: I0127 18:00:03.800467 5049 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:00:03 crc kubenswrapper[5049]: I0127 18:00:03.800502 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps9fg\" (UniqueName: \"kubernetes.io/projected/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-kube-api-access-ps9fg\") on node \"crc\" DevicePath \"\"" Jan 27 18:00:03 crc kubenswrapper[5049]: I0127 18:00:03.800514 5049 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a4812a4-0a7d-494d-ad4f-c8350c518fbd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:00:04 crc kubenswrapper[5049]: I0127 18:00:04.352000 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" event={"ID":"7a4812a4-0a7d-494d-ad4f-c8350c518fbd","Type":"ContainerDied","Data":"f78de52ffd6e01e84802bb68bb7e7ff650a0eb7c50aeb949cf34d6f2f3fa6d34"} Jan 27 18:00:04 crc kubenswrapper[5049]: I0127 18:00:04.352061 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f78de52ffd6e01e84802bb68bb7e7ff650a0eb7c50aeb949cf34d6f2f3fa6d34" Jan 27 18:00:04 crc kubenswrapper[5049]: I0127 18:00:04.352140 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq" Jan 27 18:00:04 crc kubenswrapper[5049]: I0127 18:00:04.753460 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t"] Jan 27 18:00:04 crc kubenswrapper[5049]: I0127 18:00:04.760876 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492235-mww8t"] Jan 27 18:00:05 crc kubenswrapper[5049]: I0127 18:00:05.662540 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fc45544-83c9-47c9-b26b-0e6cbeffc816" path="/var/lib/kubelet/pods/5fc45544-83c9-47c9-b26b-0e6cbeffc816/volumes" Jan 27 18:00:15 crc kubenswrapper[5049]: I0127 18:00:15.476203 5049 scope.go:117] "RemoveContainer" containerID="868790039cac7fb648d826c8f3b581bb9055b9eab643e750264daddedf3227a5" Jan 27 18:00:17 crc kubenswrapper[5049]: I0127 18:00:17.782297 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:00:17 crc kubenswrapper[5049]: I0127 18:00:17.783029 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:00:47 crc kubenswrapper[5049]: I0127 18:00:47.781744 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 27 18:00:47 crc kubenswrapper[5049]: I0127 18:00:47.782399 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.132189 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4fbrx"] Jan 27 18:01:13 crc kubenswrapper[5049]: E0127 18:01:13.133091 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4812a4-0a7d-494d-ad4f-c8350c518fbd" containerName="collect-profiles" Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.133106 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4812a4-0a7d-494d-ad4f-c8350c518fbd" containerName="collect-profiles" Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.133264 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a4812a4-0a7d-494d-ad4f-c8350c518fbd" containerName="collect-profiles" Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.134368 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.158845 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fbrx"] Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.306205 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df9afd1-fcb4-4e37-8b63-815680a51829-utilities\") pod \"redhat-operators-4fbrx\" (UID: \"7df9afd1-fcb4-4e37-8b63-815680a51829\") " pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.306300 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df9afd1-fcb4-4e37-8b63-815680a51829-catalog-content\") pod \"redhat-operators-4fbrx\" (UID: \"7df9afd1-fcb4-4e37-8b63-815680a51829\") " pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.306463 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv4rl\" (UniqueName: \"kubernetes.io/projected/7df9afd1-fcb4-4e37-8b63-815680a51829-kube-api-access-fv4rl\") pod \"redhat-operators-4fbrx\" (UID: \"7df9afd1-fcb4-4e37-8b63-815680a51829\") " pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.408013 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv4rl\" (UniqueName: \"kubernetes.io/projected/7df9afd1-fcb4-4e37-8b63-815680a51829-kube-api-access-fv4rl\") pod \"redhat-operators-4fbrx\" (UID: \"7df9afd1-fcb4-4e37-8b63-815680a51829\") " pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.408121 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df9afd1-fcb4-4e37-8b63-815680a51829-utilities\") pod \"redhat-operators-4fbrx\" (UID: \"7df9afd1-fcb4-4e37-8b63-815680a51829\") " pod="openshift-marketplace/redhat-operators-4fbrx" Jan 
27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.408168 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df9afd1-fcb4-4e37-8b63-815680a51829-catalog-content\") pod \"redhat-operators-4fbrx\" (UID: \"7df9afd1-fcb4-4e37-8b63-815680a51829\") " pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.408869 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df9afd1-fcb4-4e37-8b63-815680a51829-catalog-content\") pod \"redhat-operators-4fbrx\" (UID: \"7df9afd1-fcb4-4e37-8b63-815680a51829\") " pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.408948 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df9afd1-fcb4-4e37-8b63-815680a51829-utilities\") pod \"redhat-operators-4fbrx\" (UID: \"7df9afd1-fcb4-4e37-8b63-815680a51829\") " pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.428285 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv4rl\" (UniqueName: \"kubernetes.io/projected/7df9afd1-fcb4-4e37-8b63-815680a51829-kube-api-access-fv4rl\") pod \"redhat-operators-4fbrx\" (UID: \"7df9afd1-fcb4-4e37-8b63-815680a51829\") " pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.460776 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.918393 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fbrx"] Jan 27 18:01:13 crc kubenswrapper[5049]: I0127 18:01:13.979256 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fbrx" event={"ID":"7df9afd1-fcb4-4e37-8b63-815680a51829","Type":"ContainerStarted","Data":"3c405d0379d9cc2e228056313bbc2a5f79ae6c36ab93e681714b22b259d6ce48"} Jan 27 18:01:14 crc kubenswrapper[5049]: I0127 18:01:14.991973 5049 generic.go:334] "Generic (PLEG): container finished" podID="7df9afd1-fcb4-4e37-8b63-815680a51829" containerID="1bd16a862b2533104c5dd5a0eedf604ff635738fbe97caf6aeaac7e67bfddbbf" exitCode=0 Jan 27 18:01:14 crc kubenswrapper[5049]: I0127 18:01:14.992044 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fbrx" event={"ID":"7df9afd1-fcb4-4e37-8b63-815680a51829","Type":"ContainerDied","Data":"1bd16a862b2533104c5dd5a0eedf604ff635738fbe97caf6aeaac7e67bfddbbf"} Jan 27 18:01:14 crc kubenswrapper[5049]: I0127 18:01:14.996987 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:01:16 crc kubenswrapper[5049]: I0127 18:01:16.006853 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fbrx" event={"ID":"7df9afd1-fcb4-4e37-8b63-815680a51829","Type":"ContainerStarted","Data":"986ad1f00c67e4fec32a448eb9590cc9ddc744f987022d7f0cc3d1df25bb111a"} Jan 27 18:01:17 crc kubenswrapper[5049]: I0127 18:01:17.018762 5049 generic.go:334] "Generic (PLEG): container finished" podID="7df9afd1-fcb4-4e37-8b63-815680a51829" containerID="986ad1f00c67e4fec32a448eb9590cc9ddc744f987022d7f0cc3d1df25bb111a" exitCode=0 Jan 27 18:01:17 crc 
kubenswrapper[5049]: I0127 18:01:17.018862 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fbrx" event={"ID":"7df9afd1-fcb4-4e37-8b63-815680a51829","Type":"ContainerDied","Data":"986ad1f00c67e4fec32a448eb9590cc9ddc744f987022d7f0cc3d1df25bb111a"} Jan 27 18:01:17 crc kubenswrapper[5049]: I0127 18:01:17.781282 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:01:17 crc kubenswrapper[5049]: I0127 18:01:17.781364 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:01:17 crc kubenswrapper[5049]: I0127 18:01:17.781421 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 18:01:17 crc kubenswrapper[5049]: I0127 18:01:17.782444 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:01:17 crc kubenswrapper[5049]: I0127 18:01:17.782611 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" gracePeriod=600 Jan 27 18:01:17 crc kubenswrapper[5049]: E0127 18:01:17.905320 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:01:18 crc kubenswrapper[5049]: I0127 18:01:18.030386 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" exitCode=0 Jan 27 18:01:18 crc kubenswrapper[5049]: I0127 18:01:18.030446 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7"} Jan 27 18:01:18 crc kubenswrapper[5049]: I0127 18:01:18.030504 5049 scope.go:117] "RemoveContainer" containerID="da1017db3620f8ca96212deeac23951ff27fbdca907d2ca5fa295cd64575db3c" Jan 27 18:01:18 crc kubenswrapper[5049]: I0127 18:01:18.031630 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:01:18 crc 
kubenswrapper[5049]: E0127 18:01:18.032024 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:01:19 crc kubenswrapper[5049]: I0127 18:01:19.043795 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fbrx" event={"ID":"7df9afd1-fcb4-4e37-8b63-815680a51829","Type":"ContainerStarted","Data":"09978a183324194e617197fb0671eb93478267dd074960eef079325f3a714a18"} Jan 27 18:01:19 crc kubenswrapper[5049]: I0127 18:01:19.086748 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4fbrx" podStartSLOduration=2.646439623 podStartE2EDuration="6.086723894s" podCreationTimestamp="2026-01-27 18:01:13 +0000 UTC" firstStartedPulling="2026-01-27 18:01:14.996570737 +0000 UTC m=+3850.095544326" lastFinishedPulling="2026-01-27 18:01:18.436855038 +0000 UTC m=+3853.535828597" observedRunningTime="2026-01-27 18:01:19.075026974 +0000 UTC m=+3854.174000613" watchObservedRunningTime="2026-01-27 18:01:19.086723894 +0000 UTC m=+3854.185697483" Jan 27 18:01:23 crc kubenswrapper[5049]: I0127 18:01:23.461926 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:23 crc kubenswrapper[5049]: I0127 18:01:23.462957 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:24 crc kubenswrapper[5049]: I0127 18:01:24.545959 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4fbrx" podUID="7df9afd1-fcb4-4e37-8b63-815680a51829" containerName="registry-server" probeResult="failure" output=< Jan 27 18:01:24 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 18:01:24 crc kubenswrapper[5049]: > Jan 27 18:01:33 crc kubenswrapper[5049]: I0127 18:01:33.530449 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:33 crc kubenswrapper[5049]: I0127 18:01:33.578881 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:33 crc kubenswrapper[5049]: I0127 18:01:33.645954 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:01:33 crc kubenswrapper[5049]: E0127 18:01:33.646164 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:01:33 crc kubenswrapper[5049]: I0127 18:01:33.775247 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4fbrx"] Jan 27 18:01:35 crc kubenswrapper[5049]: I0127 18:01:35.182202 5049 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-marketplace/redhat-operators-4fbrx" podUID="7df9afd1-fcb4-4e37-8b63-815680a51829" containerName="registry-server" containerID="cri-o://09978a183324194e617197fb0671eb93478267dd074960eef079325f3a714a18" gracePeriod=2 Jan 27 18:01:36 crc kubenswrapper[5049]: I0127 18:01:36.192427 5049 generic.go:334] "Generic (PLEG): container finished" podID="7df9afd1-fcb4-4e37-8b63-815680a51829" containerID="09978a183324194e617197fb0671eb93478267dd074960eef079325f3a714a18" exitCode=0 Jan 27 18:01:36 crc kubenswrapper[5049]: I0127 18:01:36.192537 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fbrx" event={"ID":"7df9afd1-fcb4-4e37-8b63-815680a51829","Type":"ContainerDied","Data":"09978a183324194e617197fb0671eb93478267dd074960eef079325f3a714a18"} Jan 27 18:01:36 crc kubenswrapper[5049]: I0127 18:01:36.192849 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fbrx" event={"ID":"7df9afd1-fcb4-4e37-8b63-815680a51829","Type":"ContainerDied","Data":"3c405d0379d9cc2e228056313bbc2a5f79ae6c36ab93e681714b22b259d6ce48"} Jan 27 18:01:36 crc kubenswrapper[5049]: I0127 18:01:36.192886 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c405d0379d9cc2e228056313bbc2a5f79ae6c36ab93e681714b22b259d6ce48" Jan 27 18:01:36 crc kubenswrapper[5049]: I0127 18:01:36.211156 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:36 crc kubenswrapper[5049]: I0127 18:01:36.270374 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv4rl\" (UniqueName: \"kubernetes.io/projected/7df9afd1-fcb4-4e37-8b63-815680a51829-kube-api-access-fv4rl\") pod \"7df9afd1-fcb4-4e37-8b63-815680a51829\" (UID: \"7df9afd1-fcb4-4e37-8b63-815680a51829\") " Jan 27 18:01:36 crc kubenswrapper[5049]: I0127 18:01:36.270536 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df9afd1-fcb4-4e37-8b63-815680a51829-catalog-content\") pod \"7df9afd1-fcb4-4e37-8b63-815680a51829\" (UID: \"7df9afd1-fcb4-4e37-8b63-815680a51829\") " Jan 27 18:01:36 crc kubenswrapper[5049]: I0127 18:01:36.270597 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df9afd1-fcb4-4e37-8b63-815680a51829-utilities\") pod \"7df9afd1-fcb4-4e37-8b63-815680a51829\" (UID: \"7df9afd1-fcb4-4e37-8b63-815680a51829\") " Jan 27 18:01:36 crc kubenswrapper[5049]: I0127 18:01:36.271606 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df9afd1-fcb4-4e37-8b63-815680a51829-utilities" (OuterVolumeSpecName: "utilities") pod "7df9afd1-fcb4-4e37-8b63-815680a51829" (UID: "7df9afd1-fcb4-4e37-8b63-815680a51829"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:01:36 crc kubenswrapper[5049]: I0127 18:01:36.271799 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df9afd1-fcb4-4e37-8b63-815680a51829-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:01:36 crc kubenswrapper[5049]: I0127 18:01:36.276033 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df9afd1-fcb4-4e37-8b63-815680a51829-kube-api-access-fv4rl" (OuterVolumeSpecName: "kube-api-access-fv4rl") pod "7df9afd1-fcb4-4e37-8b63-815680a51829" (UID: "7df9afd1-fcb4-4e37-8b63-815680a51829"). InnerVolumeSpecName "kube-api-access-fv4rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:01:36 crc kubenswrapper[5049]: I0127 18:01:36.373119 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv4rl\" (UniqueName: \"kubernetes.io/projected/7df9afd1-fcb4-4e37-8b63-815680a51829-kube-api-access-fv4rl\") on node \"crc\" DevicePath \"\"" Jan 27 18:01:36 crc kubenswrapper[5049]: I0127 18:01:36.399758 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df9afd1-fcb4-4e37-8b63-815680a51829-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7df9afd1-fcb4-4e37-8b63-815680a51829" (UID: "7df9afd1-fcb4-4e37-8b63-815680a51829"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:01:36 crc kubenswrapper[5049]: I0127 18:01:36.474744 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df9afd1-fcb4-4e37-8b63-815680a51829-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:01:37 crc kubenswrapper[5049]: I0127 18:01:37.202439 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4fbrx" Jan 27 18:01:37 crc kubenswrapper[5049]: I0127 18:01:37.272808 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4fbrx"] Jan 27 18:01:37 crc kubenswrapper[5049]: I0127 18:01:37.283620 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4fbrx"] Jan 27 18:01:37 crc kubenswrapper[5049]: I0127 18:01:37.663775 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df9afd1-fcb4-4e37-8b63-815680a51829" path="/var/lib/kubelet/pods/7df9afd1-fcb4-4e37-8b63-815680a51829/volumes" Jan 27 18:01:46 crc kubenswrapper[5049]: I0127 18:01:46.646423 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:01:46 crc kubenswrapper[5049]: E0127 18:01:46.647315 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:01:59 crc kubenswrapper[5049]: I0127 18:01:59.646644 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:01:59 crc kubenswrapper[5049]: E0127 18:01:59.647584 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:02:11 crc kubenswrapper[5049]: I0127 18:02:11.646835 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:02:11 crc kubenswrapper[5049]: E0127 18:02:11.648002 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:02:26 crc kubenswrapper[5049]: I0127 18:02:26.646436 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:02:26 crc kubenswrapper[5049]: E0127 18:02:26.647456 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:02:41 crc kubenswrapper[5049]: I0127 18:02:41.646419 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:02:41 crc 
kubenswrapper[5049]: E0127 18:02:41.647346 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:02:56 crc kubenswrapper[5049]: I0127 18:02:56.646996 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:02:56 crc kubenswrapper[5049]: E0127 18:02:56.648124 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:03:08 crc kubenswrapper[5049]: I0127 18:03:08.646564 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:03:08 crc kubenswrapper[5049]: E0127 18:03:08.647896 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:03:21 crc kubenswrapper[5049]: I0127 18:03:21.646491 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:03:21 crc kubenswrapper[5049]: E0127 18:03:21.647938 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:03:34 crc kubenswrapper[5049]: I0127 18:03:34.646760 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:03:34 crc kubenswrapper[5049]: E0127 18:03:34.647541 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:03:45 crc kubenswrapper[5049]: I0127 18:03:45.653739 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:03:45 crc kubenswrapper[5049]: E0127 18:03:45.654624 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:03:57 crc kubenswrapper[5049]: I0127 18:03:57.646804 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:03:57 crc kubenswrapper[5049]: E0127 18:03:57.648128 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:04:11 crc kubenswrapper[5049]: I0127 18:04:11.646740 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:04:11 crc kubenswrapper[5049]: E0127 18:04:11.647624 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:04:22 crc kubenswrapper[5049]: I0127 18:04:22.647102 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:04:22 crc kubenswrapper[5049]: E0127 18:04:22.648223 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:04:35 crc kubenswrapper[5049]: I0127 18:04:35.653566 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:04:35 crc kubenswrapper[5049]: E0127 18:04:35.654748 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:04:50 crc kubenswrapper[5049]: I0127 18:04:50.645694 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:04:50 crc kubenswrapper[5049]: E0127 18:04:50.646647 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:05:03 crc kubenswrapper[5049]: I0127 18:05:03.645579 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:05:03 crc kubenswrapper[5049]: E0127 18:05:03.646393 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:05:14 crc kubenswrapper[5049]: I0127 18:05:14.645629 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:05:14 crc kubenswrapper[5049]: E0127 18:05:14.646360 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:05:16 crc kubenswrapper[5049]: I0127 18:05:16.876308 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5jmrg"] Jan 27 18:05:16 crc kubenswrapper[5049]: E0127 18:05:16.877176 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df9afd1-fcb4-4e37-8b63-815680a51829" containerName="extract-utilities" Jan 27 18:05:16 crc kubenswrapper[5049]: I0127 18:05:16.877200 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df9afd1-fcb4-4e37-8b63-815680a51829" containerName="extract-utilities" Jan 27 18:05:16 crc kubenswrapper[5049]: E0127 18:05:16.877223 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df9afd1-fcb4-4e37-8b63-815680a51829" containerName="extract-content" Jan 27 18:05:16 crc kubenswrapper[5049]: I0127 18:05:16.877235 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df9afd1-fcb4-4e37-8b63-815680a51829" containerName="extract-content" Jan 27 18:05:16 crc kubenswrapper[5049]: E0127 18:05:16.877263 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df9afd1-fcb4-4e37-8b63-815680a51829" containerName="registry-server" Jan 27 18:05:16 crc kubenswrapper[5049]: I0127 18:05:16.877275 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df9afd1-fcb4-4e37-8b63-815680a51829" containerName="registry-server" Jan 27 18:05:16 crc kubenswrapper[5049]: I0127 18:05:16.877510 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df9afd1-fcb4-4e37-8b63-815680a51829" containerName="registry-server" Jan 27 18:05:16 crc kubenswrapper[5049]: I0127 18:05:16.879143 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:16 crc kubenswrapper[5049]: I0127 18:05:16.892096 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jmrg"] Jan 27 18:05:17 crc kubenswrapper[5049]: I0127 18:05:17.036367 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e930337-4526-487a-98ae-eabb0523cf63-utilities\") pod \"community-operators-5jmrg\" (UID: \"4e930337-4526-487a-98ae-eabb0523cf63\") " pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:17 crc kubenswrapper[5049]: I0127 18:05:17.036442 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e930337-4526-487a-98ae-eabb0523cf63-catalog-content\") pod \"community-operators-5jmrg\" (UID: \"4e930337-4526-487a-98ae-eabb0523cf63\") " pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:17 crc kubenswrapper[5049]: I0127 18:05:17.036532 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br45k\" (UniqueName: \"kubernetes.io/projected/4e930337-4526-487a-98ae-eabb0523cf63-kube-api-access-br45k\") pod \"community-operators-5jmrg\" (UID: \"4e930337-4526-487a-98ae-eabb0523cf63\") " pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:17 crc kubenswrapper[5049]: I0127 18:05:17.137470 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br45k\" (UniqueName: \"kubernetes.io/projected/4e930337-4526-487a-98ae-eabb0523cf63-kube-api-access-br45k\") pod \"community-operators-5jmrg\" (UID: \"4e930337-4526-487a-98ae-eabb0523cf63\") " pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:17 crc kubenswrapper[5049]: I0127 18:05:17.137569 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e930337-4526-487a-98ae-eabb0523cf63-utilities\") pod \"community-operators-5jmrg\" (UID: \"4e930337-4526-487a-98ae-eabb0523cf63\") " pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:17 crc kubenswrapper[5049]: I0127 18:05:17.137641 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e930337-4526-487a-98ae-eabb0523cf63-catalog-content\") pod \"community-operators-5jmrg\" (UID: \"4e930337-4526-487a-98ae-eabb0523cf63\") " pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:17 crc kubenswrapper[5049]: I0127 18:05:17.138126 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e930337-4526-487a-98ae-eabb0523cf63-utilities\") pod \"community-operators-5jmrg\" (UID: \"4e930337-4526-487a-98ae-eabb0523cf63\") " pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:17 crc kubenswrapper[5049]: I0127 18:05:17.138263 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e930337-4526-487a-98ae-eabb0523cf63-catalog-content\") pod \"community-operators-5jmrg\" (UID: \"4e930337-4526-487a-98ae-eabb0523cf63\") " pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:17 crc kubenswrapper[5049]: I0127 18:05:17.162810 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-br45k\" (UniqueName: \"kubernetes.io/projected/4e930337-4526-487a-98ae-eabb0523cf63-kube-api-access-br45k\") pod \"community-operators-5jmrg\" (UID: \"4e930337-4526-487a-98ae-eabb0523cf63\") " pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:17 crc kubenswrapper[5049]: I0127 18:05:17.198624 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:17 crc kubenswrapper[5049]: I0127 18:05:17.717466 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jmrg"] Jan 27 18:05:18 crc kubenswrapper[5049]: I0127 18:05:18.063901 5049 generic.go:334] "Generic (PLEG): container finished" podID="4e930337-4526-487a-98ae-eabb0523cf63" containerID="3dbaf275404bfc5e3eda86a3ca5a26c25314fdb086e20f1e17449b55e8a77ee7" exitCode=0 Jan 27 18:05:18 crc kubenswrapper[5049]: I0127 18:05:18.064002 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jmrg" event={"ID":"4e930337-4526-487a-98ae-eabb0523cf63","Type":"ContainerDied","Data":"3dbaf275404bfc5e3eda86a3ca5a26c25314fdb086e20f1e17449b55e8a77ee7"} Jan 27 18:05:18 crc kubenswrapper[5049]: I0127 18:05:18.064093 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jmrg" event={"ID":"4e930337-4526-487a-98ae-eabb0523cf63","Type":"ContainerStarted","Data":"29ab9474e3e7c3ca47c09973aad3a976ed33e23c6d73a89c4386a52fc39c4f8a"} Jan 27 18:05:19 crc kubenswrapper[5049]: I0127 18:05:19.070942 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jmrg" event={"ID":"4e930337-4526-487a-98ae-eabb0523cf63","Type":"ContainerStarted","Data":"204fccbfd4629e49609974c54877df4e7493808ab44c139dd1913748e6a51141"} Jan 27 18:05:20 crc kubenswrapper[5049]: I0127 18:05:20.082221 5049 generic.go:334] "Generic (PLEG): container finished" podID="4e930337-4526-487a-98ae-eabb0523cf63" containerID="204fccbfd4629e49609974c54877df4e7493808ab44c139dd1913748e6a51141" exitCode=0 Jan 27 18:05:20 crc kubenswrapper[5049]: I0127 18:05:20.082316 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jmrg" event={"ID":"4e930337-4526-487a-98ae-eabb0523cf63","Type":"ContainerDied","Data":"204fccbfd4629e49609974c54877df4e7493808ab44c139dd1913748e6a51141"} Jan 27 18:05:21 crc kubenswrapper[5049]: I0127 18:05:21.090500 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jmrg" event={"ID":"4e930337-4526-487a-98ae-eabb0523cf63","Type":"ContainerStarted","Data":"948f00b781ee4cff952e80b93373af33d22378b66568b8e14cadfe8e38129688"} Jan 27 18:05:21 crc kubenswrapper[5049]: I0127 18:05:21.107071 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5jmrg" podStartSLOduration=2.6608596159999998 podStartE2EDuration="5.107053565s" podCreationTimestamp="2026-01-27 18:05:16 +0000 UTC" firstStartedPulling="2026-01-27 18:05:18.066298201 +0000 UTC m=+4093.165271750" lastFinishedPulling="2026-01-27 18:05:20.51249215 +0000 UTC m=+4095.611465699" observedRunningTime="2026-01-27 18:05:21.106450718 +0000 UTC m=+4096.205424277" watchObservedRunningTime="2026-01-27 18:05:21.107053565 +0000 UTC m=+4096.206027114" Jan 27 18:05:25 crc kubenswrapper[5049]: I0127 18:05:25.646768 5049 scope.go:117] "RemoveContainer" 
containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:05:25 crc kubenswrapper[5049]: E0127 18:05:25.647793 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:05:27 crc kubenswrapper[5049]: I0127 18:05:27.199387 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:27 crc kubenswrapper[5049]: I0127 18:05:27.199871 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:27 crc kubenswrapper[5049]: I0127 18:05:27.279547 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:28 crc kubenswrapper[5049]: I0127 18:05:28.191784 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:28 crc kubenswrapper[5049]: I0127 18:05:28.249810 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jmrg"] Jan 27 18:05:30 crc kubenswrapper[5049]: I0127 18:05:30.150527 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5jmrg" podUID="4e930337-4526-487a-98ae-eabb0523cf63" containerName="registry-server" containerID="cri-o://948f00b781ee4cff952e80b93373af33d22378b66568b8e14cadfe8e38129688" gracePeriod=2 Jan 27 18:05:31 crc kubenswrapper[5049]: I0127 18:05:31.163691 5049 generic.go:334] "Generic (PLEG): container finished" podID="4e930337-4526-487a-98ae-eabb0523cf63" containerID="948f00b781ee4cff952e80b93373af33d22378b66568b8e14cadfe8e38129688" exitCode=0 Jan 27 18:05:31 crc kubenswrapper[5049]: I0127 18:05:31.163813 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jmrg" event={"ID":"4e930337-4526-487a-98ae-eabb0523cf63","Type":"ContainerDied","Data":"948f00b781ee4cff952e80b93373af33d22378b66568b8e14cadfe8e38129688"} Jan 27 18:05:31 crc kubenswrapper[5049]: I0127 18:05:31.667230 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:31 crc kubenswrapper[5049]: I0127 18:05:31.861908 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br45k\" (UniqueName: \"kubernetes.io/projected/4e930337-4526-487a-98ae-eabb0523cf63-kube-api-access-br45k\") pod \"4e930337-4526-487a-98ae-eabb0523cf63\" (UID: \"4e930337-4526-487a-98ae-eabb0523cf63\") " Jan 27 18:05:31 crc kubenswrapper[5049]: I0127 18:05:31.862050 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e930337-4526-487a-98ae-eabb0523cf63-catalog-content\") pod \"4e930337-4526-487a-98ae-eabb0523cf63\" (UID: \"4e930337-4526-487a-98ae-eabb0523cf63\") " Jan 27 18:05:31 crc kubenswrapper[5049]: I0127 18:05:31.862071 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e930337-4526-487a-98ae-eabb0523cf63-utilities\") pod \"4e930337-4526-487a-98ae-eabb0523cf63\" (UID: \"4e930337-4526-487a-98ae-eabb0523cf63\") " Jan 27 18:05:31 crc kubenswrapper[5049]: I0127 18:05:31.863365 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e930337-4526-487a-98ae-eabb0523cf63-utilities" (OuterVolumeSpecName: "utilities") pod "4e930337-4526-487a-98ae-eabb0523cf63" (UID: "4e930337-4526-487a-98ae-eabb0523cf63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:05:31 crc kubenswrapper[5049]: I0127 18:05:31.871747 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e930337-4526-487a-98ae-eabb0523cf63-kube-api-access-br45k" (OuterVolumeSpecName: "kube-api-access-br45k") pod "4e930337-4526-487a-98ae-eabb0523cf63" (UID: "4e930337-4526-487a-98ae-eabb0523cf63"). InnerVolumeSpecName "kube-api-access-br45k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:05:31 crc kubenswrapper[5049]: I0127 18:05:31.922762 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e930337-4526-487a-98ae-eabb0523cf63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e930337-4526-487a-98ae-eabb0523cf63" (UID: "4e930337-4526-487a-98ae-eabb0523cf63"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:05:31 crc kubenswrapper[5049]: I0127 18:05:31.963371 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br45k\" (UniqueName: \"kubernetes.io/projected/4e930337-4526-487a-98ae-eabb0523cf63-kube-api-access-br45k\") on node \"crc\" DevicePath \"\"" Jan 27 18:05:31 crc kubenswrapper[5049]: I0127 18:05:31.963794 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e930337-4526-487a-98ae-eabb0523cf63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:05:31 crc kubenswrapper[5049]: I0127 18:05:31.963804 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e930337-4526-487a-98ae-eabb0523cf63-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:05:32 crc kubenswrapper[5049]: I0127 18:05:32.172972 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jmrg" event={"ID":"4e930337-4526-487a-98ae-eabb0523cf63","Type":"ContainerDied","Data":"29ab9474e3e7c3ca47c09973aad3a976ed33e23c6d73a89c4386a52fc39c4f8a"} Jan 27 18:05:32 crc kubenswrapper[5049]: I0127 18:05:32.173049 5049 scope.go:117] "RemoveContainer" containerID="948f00b781ee4cff952e80b93373af33d22378b66568b8e14cadfe8e38129688" Jan 27 18:05:32 crc kubenswrapper[5049]: I0127 18:05:32.173073 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jmrg" Jan 27 18:05:32 crc kubenswrapper[5049]: I0127 18:05:32.197157 5049 scope.go:117] "RemoveContainer" containerID="204fccbfd4629e49609974c54877df4e7493808ab44c139dd1913748e6a51141" Jan 27 18:05:32 crc kubenswrapper[5049]: I0127 18:05:32.222940 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5jmrg"] Jan 27 18:05:32 crc kubenswrapper[5049]: I0127 18:05:32.228610 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5jmrg"] Jan 27 18:05:32 crc kubenswrapper[5049]: I0127 18:05:32.238389 5049 scope.go:117] "RemoveContainer" containerID="3dbaf275404bfc5e3eda86a3ca5a26c25314fdb086e20f1e17449b55e8a77ee7" Jan 27 18:05:33 crc kubenswrapper[5049]: I0127 18:05:33.662242 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e930337-4526-487a-98ae-eabb0523cf63" path="/var/lib/kubelet/pods/4e930337-4526-487a-98ae-eabb0523cf63/volumes" Jan 27 18:05:36 crc kubenswrapper[5049]: I0127 18:05:36.648090 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:05:36 crc kubenswrapper[5049]: E0127 18:05:36.648601 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:05:48 crc kubenswrapper[5049]: I0127 18:05:48.646801 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:05:48 crc kubenswrapper[5049]: E0127 18:05:48.647979 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:06:01 crc kubenswrapper[5049]: I0127 18:06:01.646310 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:06:01 crc kubenswrapper[5049]: E0127 18:06:01.647432 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:06:13 crc kubenswrapper[5049]: I0127 18:06:13.645829 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:06:13 crc kubenswrapper[5049]: E0127 18:06:13.646598 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:06:26 crc kubenswrapper[5049]: I0127 18:06:26.645872 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:06:27 crc kubenswrapper[5049]: I0127 18:06:27.629644 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"054cddf21664b58e6aee3f167f2db961be7254405696ff060cacf6639c60ca27"} Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.242125 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sl5jj"] Jan 27 18:06:41 crc kubenswrapper[5049]: E0127 18:06:41.243228 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e930337-4526-487a-98ae-eabb0523cf63" containerName="extract-utilities" Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.243252 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e930337-4526-487a-98ae-eabb0523cf63" containerName="extract-utilities" Jan 27 18:06:41 crc kubenswrapper[5049]: E0127 18:06:41.243288 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e930337-4526-487a-98ae-eabb0523cf63" containerName="registry-server" Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.243302 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e930337-4526-487a-98ae-eabb0523cf63" containerName="registry-server" Jan 27 18:06:41 crc kubenswrapper[5049]: E0127 18:06:41.243337 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e930337-4526-487a-98ae-eabb0523cf63" containerName="extract-content" Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.243351 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e930337-4526-487a-98ae-eabb0523cf63" containerName="extract-content" Jan 27 18:06:41 crc 
kubenswrapper[5049]: I0127 18:06:41.243620 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e930337-4526-487a-98ae-eabb0523cf63" containerName="registry-server" Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.245518 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.282808 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl5jj"] Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.335580 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-catalog-content\") pod \"redhat-marketplace-sl5jj\" (UID: \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\") " pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.335788 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zlr\" (UniqueName: \"kubernetes.io/projected/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-kube-api-access-64zlr\") pod \"redhat-marketplace-sl5jj\" (UID: \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\") " pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.335995 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-utilities\") pod \"redhat-marketplace-sl5jj\" (UID: \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\") " pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.437130 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-utilities\") pod \"redhat-marketplace-sl5jj\" (UID: \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\") " pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.437181 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-catalog-content\") pod \"redhat-marketplace-sl5jj\" (UID: \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\") " pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.437261 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64zlr\" (UniqueName: \"kubernetes.io/projected/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-kube-api-access-64zlr\") pod \"redhat-marketplace-sl5jj\" (UID: \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\") " pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.437762 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-utilities\") pod \"redhat-marketplace-sl5jj\" (UID: \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\") " pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.437889 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-catalog-content\") pod \"redhat-marketplace-sl5jj\" (UID: \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\") " pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.455444 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64zlr\" (UniqueName: \"kubernetes.io/projected/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-kube-api-access-64zlr\") pod \"redhat-marketplace-sl5jj\" (UID: \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\") " pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:41 crc kubenswrapper[5049]: I0127 18:06:41.580203 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:42 crc kubenswrapper[5049]: I0127 18:06:42.070395 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl5jj"] Jan 27 18:06:42 crc kubenswrapper[5049]: I0127 18:06:42.799233 5049 generic.go:334] "Generic (PLEG): container finished" podID="bc27cd12-6c60-49b8-89a2-e44056f4f3f0" containerID="a15990e87d818a826a2fbd6a50b02faa51d3b7a01f9a24f2d0d06283f4875259" exitCode=0 Jan 27 18:06:42 crc kubenswrapper[5049]: I0127 18:06:42.799282 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl5jj" event={"ID":"bc27cd12-6c60-49b8-89a2-e44056f4f3f0","Type":"ContainerDied","Data":"a15990e87d818a826a2fbd6a50b02faa51d3b7a01f9a24f2d0d06283f4875259"} Jan 27 18:06:42 crc kubenswrapper[5049]: I0127 18:06:42.799328 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl5jj" event={"ID":"bc27cd12-6c60-49b8-89a2-e44056f4f3f0","Type":"ContainerStarted","Data":"1dcb6f8552d6e869fd27bcee1b6adcfeb6ec9c91c966d8edbe5ad18a64220099"} Jan 27 18:06:42 crc kubenswrapper[5049]: I0127 18:06:42.806062 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:06:43 crc kubenswrapper[5049]: I0127 18:06:43.808390 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl5jj" event={"ID":"bc27cd12-6c60-49b8-89a2-e44056f4f3f0","Type":"ContainerStarted","Data":"9a42fa56787285d110d992e991d2cb88a815fa35f693b6db21f535993a4ea276"} Jan 27 18:06:44 crc kubenswrapper[5049]: I0127 18:06:44.819481 5049 generic.go:334] "Generic (PLEG): container finished" podID="bc27cd12-6c60-49b8-89a2-e44056f4f3f0" containerID="9a42fa56787285d110d992e991d2cb88a815fa35f693b6db21f535993a4ea276" exitCode=0 Jan 27 18:06:44 crc kubenswrapper[5049]: I0127 18:06:44.819540 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl5jj" event={"ID":"bc27cd12-6c60-49b8-89a2-e44056f4f3f0","Type":"ContainerDied","Data":"9a42fa56787285d110d992e991d2cb88a815fa35f693b6db21f535993a4ea276"} Jan 27 18:06:45 crc kubenswrapper[5049]: I0127 18:06:45.851836 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl5jj" event={"ID":"bc27cd12-6c60-49b8-89a2-e44056f4f3f0","Type":"ContainerStarted","Data":"535648bba271d7535e2557a423ace7a7fb77f9c2bc060fca5944e45ad1afbf45"} Jan 27 18:06:45 crc kubenswrapper[5049]: I0127 18:06:45.890938 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sl5jj" podStartSLOduration=2.238630051 podStartE2EDuration="4.890900855s" podCreationTimestamp="2026-01-27 
18:06:41 +0000 UTC" firstStartedPulling="2026-01-27 18:06:42.80520559 +0000 UTC m=+4177.904179179" lastFinishedPulling="2026-01-27 18:06:45.457476434 +0000 UTC m=+4180.556449983" observedRunningTime="2026-01-27 18:06:45.884953228 +0000 UTC m=+4180.983926777" watchObservedRunningTime="2026-01-27 18:06:45.890900855 +0000 UTC m=+4180.989874404" Jan 27 18:06:51 crc kubenswrapper[5049]: I0127 18:06:51.580745 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:51 crc kubenswrapper[5049]: I0127 18:06:51.581246 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:51 crc kubenswrapper[5049]: I0127 18:06:51.672171 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:51 crc kubenswrapper[5049]: I0127 18:06:51.956329 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:52 crc kubenswrapper[5049]: I0127 18:06:52.010519 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl5jj"] Jan 27 18:06:53 crc kubenswrapper[5049]: I0127 18:06:53.918973 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sl5jj" podUID="bc27cd12-6c60-49b8-89a2-e44056f4f3f0" containerName="registry-server" containerID="cri-o://535648bba271d7535e2557a423ace7a7fb77f9c2bc060fca5944e45ad1afbf45" gracePeriod=2 Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.549667 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.669519 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-catalog-content\") pod \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\" (UID: \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\") " Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.669604 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-utilities\") pod \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\" (UID: \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\") " Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.669653 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64zlr\" (UniqueName: \"kubernetes.io/projected/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-kube-api-access-64zlr\") pod \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\" (UID: \"bc27cd12-6c60-49b8-89a2-e44056f4f3f0\") " Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.670774 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-utilities" (OuterVolumeSpecName: "utilities") pod "bc27cd12-6c60-49b8-89a2-e44056f4f3f0" (UID: "bc27cd12-6c60-49b8-89a2-e44056f4f3f0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.677924 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-kube-api-access-64zlr" (OuterVolumeSpecName: "kube-api-access-64zlr") pod "bc27cd12-6c60-49b8-89a2-e44056f4f3f0" (UID: "bc27cd12-6c60-49b8-89a2-e44056f4f3f0"). InnerVolumeSpecName "kube-api-access-64zlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.693089 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc27cd12-6c60-49b8-89a2-e44056f4f3f0" (UID: "bc27cd12-6c60-49b8-89a2-e44056f4f3f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.771833 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.771890 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.771913 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64zlr\" (UniqueName: \"kubernetes.io/projected/bc27cd12-6c60-49b8-89a2-e44056f4f3f0-kube-api-access-64zlr\") on node \"crc\" DevicePath \"\"" Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.941123 5049 generic.go:334] "Generic (PLEG): container finished" podID="bc27cd12-6c60-49b8-89a2-e44056f4f3f0" containerID="535648bba271d7535e2557a423ace7a7fb77f9c2bc060fca5944e45ad1afbf45" exitCode=0 Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.941182 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl5jj" Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.941198 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl5jj" event={"ID":"bc27cd12-6c60-49b8-89a2-e44056f4f3f0","Type":"ContainerDied","Data":"535648bba271d7535e2557a423ace7a7fb77f9c2bc060fca5944e45ad1afbf45"} Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.941249 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl5jj" event={"ID":"bc27cd12-6c60-49b8-89a2-e44056f4f3f0","Type":"ContainerDied","Data":"1dcb6f8552d6e869fd27bcee1b6adcfeb6ec9c91c966d8edbe5ad18a64220099"} Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.941276 5049 scope.go:117] "RemoveContainer" containerID="535648bba271d7535e2557a423ace7a7fb77f9c2bc060fca5944e45ad1afbf45" Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.977019 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl5jj"] Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.977159 5049 scope.go:117] "RemoveContainer" containerID="9a42fa56787285d110d992e991d2cb88a815fa35f693b6db21f535993a4ea276" Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.984327 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl5jj"] Jan 27 18:06:54 crc kubenswrapper[5049]: I0127 18:06:54.995554 5049 scope.go:117] "RemoveContainer" containerID="a15990e87d818a826a2fbd6a50b02faa51d3b7a01f9a24f2d0d06283f4875259" Jan 27 18:06:55 crc kubenswrapper[5049]: I0127 18:06:55.018718 5049 scope.go:117] "RemoveContainer" containerID="535648bba271d7535e2557a423ace7a7fb77f9c2bc060fca5944e45ad1afbf45" Jan 27 18:06:55 crc kubenswrapper[5049]: E0127 18:06:55.019347 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"535648bba271d7535e2557a423ace7a7fb77f9c2bc060fca5944e45ad1afbf45\": container with ID starting with 535648bba271d7535e2557a423ace7a7fb77f9c2bc060fca5944e45ad1afbf45 not found: ID does not exist" containerID="535648bba271d7535e2557a423ace7a7fb77f9c2bc060fca5944e45ad1afbf45" Jan 27 18:06:55 crc kubenswrapper[5049]: I0127 18:06:55.019395 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"535648bba271d7535e2557a423ace7a7fb77f9c2bc060fca5944e45ad1afbf45"} err="failed to get container status \"535648bba271d7535e2557a423ace7a7fb77f9c2bc060fca5944e45ad1afbf45\": rpc error: code = NotFound desc = could not find container \"535648bba271d7535e2557a423ace7a7fb77f9c2bc060fca5944e45ad1afbf45\": container with ID starting with 535648bba271d7535e2557a423ace7a7fb77f9c2bc060fca5944e45ad1afbf45 not found: ID does not exist" Jan 27 18:06:55 crc kubenswrapper[5049]: I0127 18:06:55.019425 5049 scope.go:117] "RemoveContainer" containerID="9a42fa56787285d110d992e991d2cb88a815fa35f693b6db21f535993a4ea276" Jan 27 18:06:55 crc kubenswrapper[5049]: E0127 18:06:55.019942 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a42fa56787285d110d992e991d2cb88a815fa35f693b6db21f535993a4ea276\": container with ID starting with 9a42fa56787285d110d992e991d2cb88a815fa35f693b6db21f535993a4ea276 not found: ID does not exist" containerID="9a42fa56787285d110d992e991d2cb88a815fa35f693b6db21f535993a4ea276" Jan 27 18:06:55 crc kubenswrapper[5049]: I0127 18:06:55.020005 5049 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a42fa56787285d110d992e991d2cb88a815fa35f693b6db21f535993a4ea276"} err="failed to get container status \"9a42fa56787285d110d992e991d2cb88a815fa35f693b6db21f535993a4ea276\": rpc error: code = NotFound desc = could not find container \"9a42fa56787285d110d992e991d2cb88a815fa35f693b6db21f535993a4ea276\": container with ID starting with 9a42fa56787285d110d992e991d2cb88a815fa35f693b6db21f535993a4ea276 not found: ID does not exist" Jan 27 18:06:55 crc kubenswrapper[5049]: I0127 18:06:55.020042 5049 scope.go:117] "RemoveContainer" containerID="a15990e87d818a826a2fbd6a50b02faa51d3b7a01f9a24f2d0d06283f4875259" Jan 27 18:06:55 crc kubenswrapper[5049]: E0127 18:06:55.020545 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a15990e87d818a826a2fbd6a50b02faa51d3b7a01f9a24f2d0d06283f4875259\": container with ID starting with a15990e87d818a826a2fbd6a50b02faa51d3b7a01f9a24f2d0d06283f4875259 not found: ID does not exist" containerID="a15990e87d818a826a2fbd6a50b02faa51d3b7a01f9a24f2d0d06283f4875259" Jan 27 18:06:55 crc kubenswrapper[5049]: I0127 18:06:55.020612 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a15990e87d818a826a2fbd6a50b02faa51d3b7a01f9a24f2d0d06283f4875259"} err="failed to get container status \"a15990e87d818a826a2fbd6a50b02faa51d3b7a01f9a24f2d0d06283f4875259\": rpc error: code = NotFound desc = could not find container \"a15990e87d818a826a2fbd6a50b02faa51d3b7a01f9a24f2d0d06283f4875259\": container with ID starting with a15990e87d818a826a2fbd6a50b02faa51d3b7a01f9a24f2d0d06283f4875259 not found: ID does not exist" Jan 27 18:06:55 crc kubenswrapper[5049]: I0127 18:06:55.674070 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc27cd12-6c60-49b8-89a2-e44056f4f3f0" path="/var/lib/kubelet/pods/bc27cd12-6c60-49b8-89a2-e44056f4f3f0/volumes" Jan 27 18:07:15 crc kubenswrapper[5049]: I0127 18:07:15.654792 5049 scope.go:117] "RemoveContainer" containerID="1bd16a862b2533104c5dd5a0eedf604ff635738fbe97caf6aeaac7e67bfddbbf" Jan 27 18:08:15 crc kubenswrapper[5049]: I0127 18:08:15.715882 5049 scope.go:117] "RemoveContainer" containerID="986ad1f00c67e4fec32a448eb9590cc9ddc744f987022d7f0cc3d1df25bb111a" Jan 27 18:08:15 crc kubenswrapper[5049]: I0127 18:08:15.752719 5049 scope.go:117] "RemoveContainer" containerID="09978a183324194e617197fb0671eb93478267dd074960eef079325f3a714a18" Jan 27 18:08:47 crc kubenswrapper[5049]: I0127 18:08:47.781969 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:08:47 crc kubenswrapper[5049]: I0127 18:08:47.782619 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:09:17 crc kubenswrapper[5049]: I0127 18:09:17.781870 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:09:17 crc kubenswrapper[5049]: I0127 18:09:17.782863 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[5049]: I0127 18:09:47.781171 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:09:47 crc kubenswrapper[5049]: I0127 18:09:47.781817 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:09:47 crc kubenswrapper[5049]: I0127 18:09:47.781873 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 18:09:47 crc kubenswrapper[5049]: I0127 18:09:47.782610 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"054cddf21664b58e6aee3f167f2db961be7254405696ff060cacf6639c60ca27"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:09:47 crc kubenswrapper[5049]: I0127 18:09:47.782756 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://054cddf21664b58e6aee3f167f2db961be7254405696ff060cacf6639c60ca27" gracePeriod=600 Jan 27 18:09:48 crc kubenswrapper[5049]: I0127 18:09:48.348933 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="054cddf21664b58e6aee3f167f2db961be7254405696ff060cacf6639c60ca27" exitCode=0 Jan 27 18:09:48 crc kubenswrapper[5049]: I0127 18:09:48.349009 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"054cddf21664b58e6aee3f167f2db961be7254405696ff060cacf6639c60ca27"} Jan 27 18:09:48 crc kubenswrapper[5049]: I0127 18:09:48.349247 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1"} Jan 27 18:09:48 crc kubenswrapper[5049]: I0127 18:09:48.349270 5049 scope.go:117] "RemoveContainer" containerID="25c8b2f112bd6e99f66d34f7300dbeec62cd70149fd929b0c0d455a4d345b1b7" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.607222 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v5rcv"] 
Jan 27 18:12:09 crc kubenswrapper[5049]: E0127 18:12:09.608711 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc27cd12-6c60-49b8-89a2-e44056f4f3f0" containerName="extract-content" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.608746 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc27cd12-6c60-49b8-89a2-e44056f4f3f0" containerName="extract-content" Jan 27 18:12:09 crc kubenswrapper[5049]: E0127 18:12:09.608802 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc27cd12-6c60-49b8-89a2-e44056f4f3f0" containerName="registry-server" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.608818 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc27cd12-6c60-49b8-89a2-e44056f4f3f0" containerName="registry-server" Jan 27 18:12:09 crc kubenswrapper[5049]: E0127 18:12:09.608879 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc27cd12-6c60-49b8-89a2-e44056f4f3f0" containerName="extract-utilities" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.608897 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc27cd12-6c60-49b8-89a2-e44056f4f3f0" containerName="extract-utilities" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.609496 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc27cd12-6c60-49b8-89a2-e44056f4f3f0" containerName="registry-server" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.613863 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.626065 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v5rcv"] Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.703711 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxsc7\" (UniqueName: \"kubernetes.io/projected/4e3ce452-b759-4798-90f7-53ded84821cf-kube-api-access-mxsc7\") pod \"redhat-operators-v5rcv\" (UID: \"4e3ce452-b759-4798-90f7-53ded84821cf\") " pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.704151 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3ce452-b759-4798-90f7-53ded84821cf-catalog-content\") pod \"redhat-operators-v5rcv\" (UID: \"4e3ce452-b759-4798-90f7-53ded84821cf\") " pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.704216 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3ce452-b759-4798-90f7-53ded84821cf-utilities\") pod \"redhat-operators-v5rcv\" (UID: \"4e3ce452-b759-4798-90f7-53ded84821cf\") " pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.805175 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxsc7\" (UniqueName: \"kubernetes.io/projected/4e3ce452-b759-4798-90f7-53ded84821cf-kube-api-access-mxsc7\") pod \"redhat-operators-v5rcv\" (UID: \"4e3ce452-b759-4798-90f7-53ded84821cf\") " pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.805258 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/4e3ce452-b759-4798-90f7-53ded84821cf-catalog-content\") pod \"redhat-operators-v5rcv\" (UID: \"4e3ce452-b759-4798-90f7-53ded84821cf\") " pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.805320 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3ce452-b759-4798-90f7-53ded84821cf-utilities\") pod \"redhat-operators-v5rcv\" (UID: \"4e3ce452-b759-4798-90f7-53ded84821cf\") " pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.806004 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3ce452-b759-4798-90f7-53ded84821cf-utilities\") pod \"redhat-operators-v5rcv\" (UID: \"4e3ce452-b759-4798-90f7-53ded84821cf\") " pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.806866 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3ce452-b759-4798-90f7-53ded84821cf-catalog-content\") pod \"redhat-operators-v5rcv\" (UID: \"4e3ce452-b759-4798-90f7-53ded84821cf\") " pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.837100 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxsc7\" (UniqueName: \"kubernetes.io/projected/4e3ce452-b759-4798-90f7-53ded84821cf-kube-api-access-mxsc7\") pod \"redhat-operators-v5rcv\" (UID: \"4e3ce452-b759-4798-90f7-53ded84821cf\") " pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:09 crc kubenswrapper[5049]: I0127 18:12:09.934920 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:10 crc kubenswrapper[5049]: I0127 18:12:10.398121 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v5rcv"] Jan 27 18:12:10 crc kubenswrapper[5049]: I0127 18:12:10.500786 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5rcv" event={"ID":"4e3ce452-b759-4798-90f7-53ded84821cf","Type":"ContainerStarted","Data":"dde88e3ba03c29484e069e6eac90da907e47c217678a40cc3b47e7cfb8654b99"} Jan 27 18:12:10 crc kubenswrapper[5049]: I0127 18:12:10.976026 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vwbsw"] Jan 27 18:12:10 crc kubenswrapper[5049]: I0127 18:12:10.977420 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vwbsw" Jan 27 18:12:10 crc kubenswrapper[5049]: I0127 18:12:10.991228 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwbsw"] Jan 27 18:12:11 crc kubenswrapper[5049]: I0127 18:12:11.020739 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-utilities\") pod \"certified-operators-vwbsw\" (UID: \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\") " pod="openshift-marketplace/certified-operators-vwbsw" Jan 27 18:12:11 crc kubenswrapper[5049]: I0127 18:12:11.020823 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-catalog-content\") pod \"certified-operators-vwbsw\" (UID: \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\") " pod="openshift-marketplace/certified-operators-vwbsw" Jan 27 18:12:11 crc kubenswrapper[5049]: I0127 18:12:11.021012 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5fgj\" (UniqueName: \"kubernetes.io/projected/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-kube-api-access-q5fgj\") pod \"certified-operators-vwbsw\" (UID: \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\") " pod="openshift-marketplace/certified-operators-vwbsw" Jan 27 18:12:11 crc kubenswrapper[5049]: I0127 18:12:11.122738 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-utilities\") pod \"certified-operators-vwbsw\" (UID: \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\") " pod="openshift-marketplace/certified-operators-vwbsw" Jan 27 18:12:11 crc kubenswrapper[5049]: I0127 18:12:11.123136 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-catalog-content\") pod \"certified-operators-vwbsw\" (UID: \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\") " pod="openshift-marketplace/certified-operators-vwbsw" Jan 27 18:12:11 crc kubenswrapper[5049]: I0127 18:12:11.123158 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5fgj\" (UniqueName: \"kubernetes.io/projected/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-kube-api-access-q5fgj\") pod \"certified-operators-vwbsw\" (UID: \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\") " pod="openshift-marketplace/certified-operators-vwbsw" Jan 27 18:12:11 crc kubenswrapper[5049]: I0127 18:12:11.123700 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-utilities\") pod \"certified-operators-vwbsw\" (UID: \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\") " pod="openshift-marketplace/certified-operators-vwbsw" Jan 27 18:12:11 crc kubenswrapper[5049]: I0127 18:12:11.123711 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-catalog-content\") pod \"certified-operators-vwbsw\" (UID: \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\") " pod="openshift-marketplace/certified-operators-vwbsw" Jan 27 18:12:11 crc kubenswrapper[5049]: I0127 18:12:11.148718 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q5fgj\" (UniqueName: \"kubernetes.io/projected/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-kube-api-access-q5fgj\") pod \"certified-operators-vwbsw\" (UID: \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\") " pod="openshift-marketplace/certified-operators-vwbsw" Jan 27 18:12:11 crc kubenswrapper[5049]: I0127 18:12:11.295260 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwbsw" Jan 27 18:12:11 crc kubenswrapper[5049]: I0127 18:12:11.526806 5049 generic.go:334] "Generic (PLEG): container finished" podID="4e3ce452-b759-4798-90f7-53ded84821cf" containerID="a641d10f9c38faec6f50e85861bad92352028a0d70560dadea9aa08016f2a6e9" exitCode=0 Jan 27 18:12:11 crc kubenswrapper[5049]: I0127 18:12:11.526851 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5rcv" event={"ID":"4e3ce452-b759-4798-90f7-53ded84821cf","Type":"ContainerDied","Data":"a641d10f9c38faec6f50e85861bad92352028a0d70560dadea9aa08016f2a6e9"} Jan 27 18:12:11 crc kubenswrapper[5049]: I0127 18:12:11.532209 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:12:11 crc kubenswrapper[5049]: I0127 18:12:11.616420 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vwbsw"] Jan 27 18:12:11 crc kubenswrapper[5049]: W0127 18:12:11.621840 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8e9d2e5_3b00_4a64_93ca_33be4ab36115.slice/crio-0436dd18ceec5eb41fe7d004387fca1939acdb091735c550d5110982a6768e9b WatchSource:0}: Error finding container 0436dd18ceec5eb41fe7d004387fca1939acdb091735c550d5110982a6768e9b: Status 404 returned error can't find the container with id 0436dd18ceec5eb41fe7d004387fca1939acdb091735c550d5110982a6768e9b Jan 27 18:12:12 crc kubenswrapper[5049]: I0127 18:12:12.537956 5049 generic.go:334] "Generic (PLEG): container finished" podID="c8e9d2e5-3b00-4a64-93ca-33be4ab36115" containerID="1b56962d33ca9637e75bb980cb8d374981ed1f3bec56d33d8e0339890197c29f" exitCode=0 Jan 27 18:12:12 crc kubenswrapper[5049]: I0127 18:12:12.538101 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwbsw" event={"ID":"c8e9d2e5-3b00-4a64-93ca-33be4ab36115","Type":"ContainerDied","Data":"1b56962d33ca9637e75bb980cb8d374981ed1f3bec56d33d8e0339890197c29f"} Jan 27 18:12:12 crc kubenswrapper[5049]: I0127 18:12:12.538439 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwbsw" event={"ID":"c8e9d2e5-3b00-4a64-93ca-33be4ab36115","Type":"ContainerStarted","Data":"0436dd18ceec5eb41fe7d004387fca1939acdb091735c550d5110982a6768e9b"} Jan 27 18:12:13 crc kubenswrapper[5049]: I0127 18:12:13.548128 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5rcv" event={"ID":"4e3ce452-b759-4798-90f7-53ded84821cf","Type":"ContainerStarted","Data":"785114b80d4693e7fdb3550e1987bdb2c7a071f81677c6957d05670ce594d87a"} Jan 27 18:12:13 crc kubenswrapper[5049]: I0127 18:12:13.551617 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwbsw" event={"ID":"c8e9d2e5-3b00-4a64-93ca-33be4ab36115","Type":"ContainerStarted","Data":"b72f5473b4e16a6c61715a7a0e3fcefbfb830458c813204e91b724f7b6210dea"} Jan 27 18:12:14 crc kubenswrapper[5049]: I0127 
18:12:14.564203 5049 generic.go:334] "Generic (PLEG): container finished" podID="4e3ce452-b759-4798-90f7-53ded84821cf" containerID="785114b80d4693e7fdb3550e1987bdb2c7a071f81677c6957d05670ce594d87a" exitCode=0
Jan 27 18:12:14 crc kubenswrapper[5049]: I0127 18:12:14.564280 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5rcv" event={"ID":"4e3ce452-b759-4798-90f7-53ded84821cf","Type":"ContainerDied","Data":"785114b80d4693e7fdb3550e1987bdb2c7a071f81677c6957d05670ce594d87a"}
Jan 27 18:12:14 crc kubenswrapper[5049]: I0127 18:12:14.568492 5049 generic.go:334] "Generic (PLEG): container finished" podID="c8e9d2e5-3b00-4a64-93ca-33be4ab36115" containerID="b72f5473b4e16a6c61715a7a0e3fcefbfb830458c813204e91b724f7b6210dea" exitCode=0
Jan 27 18:12:14 crc kubenswrapper[5049]: I0127 18:12:14.568530 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwbsw" event={"ID":"c8e9d2e5-3b00-4a64-93ca-33be4ab36115","Type":"ContainerDied","Data":"b72f5473b4e16a6c61715a7a0e3fcefbfb830458c813204e91b724f7b6210dea"}
Jan 27 18:12:16 crc kubenswrapper[5049]: I0127 18:12:16.587635 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5rcv" event={"ID":"4e3ce452-b759-4798-90f7-53ded84821cf","Type":"ContainerStarted","Data":"cbfd71162995daf5e7e69e47c4f4319251cfe15b4822a4ea3886cc03bef500d6"}
Jan 27 18:12:16 crc kubenswrapper[5049]: I0127 18:12:16.589466 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwbsw" event={"ID":"c8e9d2e5-3b00-4a64-93ca-33be4ab36115","Type":"ContainerStarted","Data":"f3b7cfbe0e7d771a51a771037bd8fea92170c7920c38ccdfa467ee54e85eaa36"}
Jan 27 18:12:16 crc kubenswrapper[5049]: I0127 18:12:16.612721 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v5rcv" podStartSLOduration=2.958653887 podStartE2EDuration="7.612701781s" podCreationTimestamp="2026-01-27 18:12:09 +0000 UTC" firstStartedPulling="2026-01-27 18:12:11.532022827 +0000 UTC m=+4506.630996376" lastFinishedPulling="2026-01-27 18:12:16.186070721 +0000 UTC m=+4511.285044270" observedRunningTime="2026-01-27 18:12:16.60727187 +0000 UTC m=+4511.706245499" watchObservedRunningTime="2026-01-27 18:12:16.612701781 +0000 UTC m=+4511.711675330"
Jan 27 18:12:16 crc kubenswrapper[5049]: I0127 18:12:16.635149 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vwbsw" podStartSLOduration=3.30807532 podStartE2EDuration="6.635129928s" podCreationTimestamp="2026-01-27 18:12:10 +0000 UTC" firstStartedPulling="2026-01-27 18:12:12.54000173 +0000 UTC m=+4507.638975289" lastFinishedPulling="2026-01-27 18:12:15.867056348 +0000 UTC m=+4510.966029897" observedRunningTime="2026-01-27 18:12:16.627772552 +0000 UTC m=+4511.726746141" watchObservedRunningTime="2026-01-27 18:12:16.635129928 +0000 UTC m=+4511.734103477"
Jan 27 18:12:17 crc kubenswrapper[5049]: I0127 18:12:17.782378 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:12:17 crc kubenswrapper[5049]: I0127 18:12:17.782470 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:12:19 crc kubenswrapper[5049]: I0127 18:12:19.935068 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v5rcv"
Jan 27 18:12:19 crc kubenswrapper[5049]: I0127 18:12:19.935403 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v5rcv"
Jan 27 18:12:20 crc kubenswrapper[5049]: I0127 18:12:20.983545 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v5rcv" podUID="4e3ce452-b759-4798-90f7-53ded84821cf" containerName="registry-server" probeResult="failure" output=<
Jan 27 18:12:20 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s
Jan 27 18:12:20 crc kubenswrapper[5049]: >
Jan 27 18:12:21 crc kubenswrapper[5049]: I0127 18:12:21.295647 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vwbsw"
Jan 27 18:12:21 crc kubenswrapper[5049]: I0127 18:12:21.295741 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vwbsw"
Jan 27 18:12:21 crc kubenswrapper[5049]: I0127 18:12:21.333712 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vwbsw"
Jan 27 18:12:21 crc kubenswrapper[5049]: I0127 18:12:21.688057 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vwbsw"
Jan 27 18:12:21 crc kubenswrapper[5049]: I0127 18:12:21.731418 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwbsw"]
Jan 27 18:12:23 crc kubenswrapper[5049]: I0127 18:12:23.659689 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vwbsw" podUID="c8e9d2e5-3b00-4a64-93ca-33be4ab36115" containerName="registry-server" containerID="cri-o://f3b7cfbe0e7d771a51a771037bd8fea92170c7920c38ccdfa467ee54e85eaa36" gracePeriod=2
Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.147654 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vwbsw"
Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.239657 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-utilities\") pod \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\" (UID: \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\") "
Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.239898 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5fgj\" (UniqueName: \"kubernetes.io/projected/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-kube-api-access-q5fgj\") pod \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\" (UID: \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\") "
Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.239958 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-catalog-content\") pod \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\" (UID: \"c8e9d2e5-3b00-4a64-93ca-33be4ab36115\") "
Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.240562 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-utilities" (OuterVolumeSpecName: "utilities") pod "c8e9d2e5-3b00-4a64-93ca-33be4ab36115" (UID: "c8e9d2e5-3b00-4a64-93ca-33be4ab36115"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.248440 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-kube-api-access-q5fgj" (OuterVolumeSpecName: "kube-api-access-q5fgj") pod "c8e9d2e5-3b00-4a64-93ca-33be4ab36115" (UID: "c8e9d2e5-3b00-4a64-93ca-33be4ab36115"). InnerVolumeSpecName "kube-api-access-q5fgj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.288071 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8e9d2e5-3b00-4a64-93ca-33be4ab36115" (UID: "c8e9d2e5-3b00-4a64-93ca-33be4ab36115"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.341398 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5fgj\" (UniqueName: \"kubernetes.io/projected/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-kube-api-access-q5fgj\") on node \"crc\" DevicePath \"\"" Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.341436 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.341450 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e9d2e5-3b00-4a64-93ca-33be4ab36115-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.668993 5049 generic.go:334] "Generic (PLEG): container finished" podID="c8e9d2e5-3b00-4a64-93ca-33be4ab36115" containerID="f3b7cfbe0e7d771a51a771037bd8fea92170c7920c38ccdfa467ee54e85eaa36" exitCode=0 Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.669056 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwbsw" event={"ID":"c8e9d2e5-3b00-4a64-93ca-33be4ab36115","Type":"ContainerDied","Data":"f3b7cfbe0e7d771a51a771037bd8fea92170c7920c38ccdfa467ee54e85eaa36"} Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.669102 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vwbsw" event={"ID":"c8e9d2e5-3b00-4a64-93ca-33be4ab36115","Type":"ContainerDied","Data":"0436dd18ceec5eb41fe7d004387fca1939acdb091735c550d5110982a6768e9b"} Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.669123 5049 scope.go:117] "RemoveContainer" containerID="f3b7cfbe0e7d771a51a771037bd8fea92170c7920c38ccdfa467ee54e85eaa36" Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.669358 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vwbsw" Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.694627 5049 scope.go:117] "RemoveContainer" containerID="b72f5473b4e16a6c61715a7a0e3fcefbfb830458c813204e91b724f7b6210dea" Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.699518 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vwbsw"] Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.707127 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vwbsw"] Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.722422 5049 scope.go:117] "RemoveContainer" containerID="1b56962d33ca9637e75bb980cb8d374981ed1f3bec56d33d8e0339890197c29f" Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.740140 5049 scope.go:117] "RemoveContainer" containerID="f3b7cfbe0e7d771a51a771037bd8fea92170c7920c38ccdfa467ee54e85eaa36" Jan 27 18:12:24 crc kubenswrapper[5049]: E0127 18:12:24.740772 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3b7cfbe0e7d771a51a771037bd8fea92170c7920c38ccdfa467ee54e85eaa36\": container with ID starting with f3b7cfbe0e7d771a51a771037bd8fea92170c7920c38ccdfa467ee54e85eaa36 not found: ID does not exist" containerID="f3b7cfbe0e7d771a51a771037bd8fea92170c7920c38ccdfa467ee54e85eaa36" Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.740824 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3b7cfbe0e7d771a51a771037bd8fea92170c7920c38ccdfa467ee54e85eaa36"} err="failed to get container status \"f3b7cfbe0e7d771a51a771037bd8fea92170c7920c38ccdfa467ee54e85eaa36\": rpc error: code = NotFound desc = could not find container \"f3b7cfbe0e7d771a51a771037bd8fea92170c7920c38ccdfa467ee54e85eaa36\": container with ID starting with f3b7cfbe0e7d771a51a771037bd8fea92170c7920c38ccdfa467ee54e85eaa36 not found: ID does not exist" Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.740857 5049 scope.go:117] "RemoveContainer" containerID="b72f5473b4e16a6c61715a7a0e3fcefbfb830458c813204e91b724f7b6210dea" Jan 27 18:12:24 crc kubenswrapper[5049]: E0127 18:12:24.741328 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b72f5473b4e16a6c61715a7a0e3fcefbfb830458c813204e91b724f7b6210dea\": container with ID starting with b72f5473b4e16a6c61715a7a0e3fcefbfb830458c813204e91b724f7b6210dea not found: ID does not exist" containerID="b72f5473b4e16a6c61715a7a0e3fcefbfb830458c813204e91b724f7b6210dea" Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.741375 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b72f5473b4e16a6c61715a7a0e3fcefbfb830458c813204e91b724f7b6210dea"} err="failed to get container status \"b72f5473b4e16a6c61715a7a0e3fcefbfb830458c813204e91b724f7b6210dea\": rpc error: code = NotFound desc = could not find container \"b72f5473b4e16a6c61715a7a0e3fcefbfb830458c813204e91b724f7b6210dea\": container with ID starting with b72f5473b4e16a6c61715a7a0e3fcefbfb830458c813204e91b724f7b6210dea not found: ID does not exist" Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.741391 5049 scope.go:117] "RemoveContainer" containerID="1b56962d33ca9637e75bb980cb8d374981ed1f3bec56d33d8e0339890197c29f" Jan 27 18:12:24 crc kubenswrapper[5049]: E0127 18:12:24.741854 5049 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1b56962d33ca9637e75bb980cb8d374981ed1f3bec56d33d8e0339890197c29f\": container with ID starting with 1b56962d33ca9637e75bb980cb8d374981ed1f3bec56d33d8e0339890197c29f not found: ID does not exist" containerID="1b56962d33ca9637e75bb980cb8d374981ed1f3bec56d33d8e0339890197c29f" Jan 27 18:12:24 crc kubenswrapper[5049]: I0127 18:12:24.741899 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b56962d33ca9637e75bb980cb8d374981ed1f3bec56d33d8e0339890197c29f"} err="failed to get container status \"1b56962d33ca9637e75bb980cb8d374981ed1f3bec56d33d8e0339890197c29f\": rpc error: code = NotFound desc = could not find container \"1b56962d33ca9637e75bb980cb8d374981ed1f3bec56d33d8e0339890197c29f\": container with ID starting with 1b56962d33ca9637e75bb980cb8d374981ed1f3bec56d33d8e0339890197c29f not found: ID does not exist" Jan 27 18:12:25 crc kubenswrapper[5049]: I0127 18:12:25.659363 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e9d2e5-3b00-4a64-93ca-33be4ab36115" path="/var/lib/kubelet/pods/c8e9d2e5-3b00-4a64-93ca-33be4ab36115/volumes" Jan 27 18:12:30 crc kubenswrapper[5049]: I0127 18:12:29.999566 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:30 crc kubenswrapper[5049]: I0127 18:12:30.077912 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:30 crc kubenswrapper[5049]: I0127 18:12:30.898843 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v5rcv"] Jan 27 18:12:31 crc kubenswrapper[5049]: I0127 18:12:31.746600 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v5rcv" podUID="4e3ce452-b759-4798-90f7-53ded84821cf" containerName="registry-server" containerID="cri-o://cbfd71162995daf5e7e69e47c4f4319251cfe15b4822a4ea3886cc03bef500d6" gracePeriod=2 Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.203365 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.371160 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxsc7\" (UniqueName: \"kubernetes.io/projected/4e3ce452-b759-4798-90f7-53ded84821cf-kube-api-access-mxsc7\") pod \"4e3ce452-b759-4798-90f7-53ded84821cf\" (UID: \"4e3ce452-b759-4798-90f7-53ded84821cf\") " Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.371356 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3ce452-b759-4798-90f7-53ded84821cf-catalog-content\") pod \"4e3ce452-b759-4798-90f7-53ded84821cf\" (UID: \"4e3ce452-b759-4798-90f7-53ded84821cf\") " Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.371875 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3ce452-b759-4798-90f7-53ded84821cf-utilities\") pod \"4e3ce452-b759-4798-90f7-53ded84821cf\" (UID: \"4e3ce452-b759-4798-90f7-53ded84821cf\") " Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.372686 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3ce452-b759-4798-90f7-53ded84821cf-utilities" (OuterVolumeSpecName: "utilities") pod "4e3ce452-b759-4798-90f7-53ded84821cf" (UID: "4e3ce452-b759-4798-90f7-53ded84821cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.379424 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3ce452-b759-4798-90f7-53ded84821cf-kube-api-access-mxsc7" (OuterVolumeSpecName: "kube-api-access-mxsc7") pod "4e3ce452-b759-4798-90f7-53ded84821cf" (UID: "4e3ce452-b759-4798-90f7-53ded84821cf"). InnerVolumeSpecName "kube-api-access-mxsc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.474378 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxsc7\" (UniqueName: \"kubernetes.io/projected/4e3ce452-b759-4798-90f7-53ded84821cf-kube-api-access-mxsc7\") on node \"crc\" DevicePath \"\"" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.474428 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e3ce452-b759-4798-90f7-53ded84821cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.549906 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e3ce452-b759-4798-90f7-53ded84821cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e3ce452-b759-4798-90f7-53ded84821cf" (UID: "4e3ce452-b759-4798-90f7-53ded84821cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.575500 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e3ce452-b759-4798-90f7-53ded84821cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.756724 5049 generic.go:334] "Generic (PLEG): container finished" podID="4e3ce452-b759-4798-90f7-53ded84821cf" containerID="cbfd71162995daf5e7e69e47c4f4319251cfe15b4822a4ea3886cc03bef500d6" exitCode=0 Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.756777 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5rcv" event={"ID":"4e3ce452-b759-4798-90f7-53ded84821cf","Type":"ContainerDied","Data":"cbfd71162995daf5e7e69e47c4f4319251cfe15b4822a4ea3886cc03bef500d6"} Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.756800 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v5rcv" event={"ID":"4e3ce452-b759-4798-90f7-53ded84821cf","Type":"ContainerDied","Data":"dde88e3ba03c29484e069e6eac90da907e47c217678a40cc3b47e7cfb8654b99"} Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.756818 5049 scope.go:117] "RemoveContainer" containerID="cbfd71162995daf5e7e69e47c4f4319251cfe15b4822a4ea3886cc03bef500d6" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.756829 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v5rcv" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.794895 5049 scope.go:117] "RemoveContainer" containerID="785114b80d4693e7fdb3550e1987bdb2c7a071f81677c6957d05670ce594d87a" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.812739 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v5rcv"] Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.814783 5049 scope.go:117] "RemoveContainer" containerID="a641d10f9c38faec6f50e85861bad92352028a0d70560dadea9aa08016f2a6e9" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.826961 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v5rcv"] Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.847698 5049 scope.go:117] "RemoveContainer" containerID="cbfd71162995daf5e7e69e47c4f4319251cfe15b4822a4ea3886cc03bef500d6" Jan 27 18:12:32 crc kubenswrapper[5049]: E0127 18:12:32.848165 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbfd71162995daf5e7e69e47c4f4319251cfe15b4822a4ea3886cc03bef500d6\": container with ID starting with cbfd71162995daf5e7e69e47c4f4319251cfe15b4822a4ea3886cc03bef500d6 not found: ID does not exist" containerID="cbfd71162995daf5e7e69e47c4f4319251cfe15b4822a4ea3886cc03bef500d6" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.848206 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbfd71162995daf5e7e69e47c4f4319251cfe15b4822a4ea3886cc03bef500d6"} err="failed to get container status \"cbfd71162995daf5e7e69e47c4f4319251cfe15b4822a4ea3886cc03bef500d6\": rpc error: code = NotFound desc = could not find container \"cbfd71162995daf5e7e69e47c4f4319251cfe15b4822a4ea3886cc03bef500d6\": container with ID starting with cbfd71162995daf5e7e69e47c4f4319251cfe15b4822a4ea3886cc03bef500d6 not found: ID does not exist" Jan 27 18:12:32 crc 
kubenswrapper[5049]: I0127 18:12:32.848233 5049 scope.go:117] "RemoveContainer" containerID="785114b80d4693e7fdb3550e1987bdb2c7a071f81677c6957d05670ce594d87a" Jan 27 18:12:32 crc kubenswrapper[5049]: E0127 18:12:32.848515 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"785114b80d4693e7fdb3550e1987bdb2c7a071f81677c6957d05670ce594d87a\": container with ID starting with 785114b80d4693e7fdb3550e1987bdb2c7a071f81677c6957d05670ce594d87a not found: ID does not exist" containerID="785114b80d4693e7fdb3550e1987bdb2c7a071f81677c6957d05670ce594d87a" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.848571 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785114b80d4693e7fdb3550e1987bdb2c7a071f81677c6957d05670ce594d87a"} err="failed to get container status \"785114b80d4693e7fdb3550e1987bdb2c7a071f81677c6957d05670ce594d87a\": rpc error: code = NotFound desc = could not find container \"785114b80d4693e7fdb3550e1987bdb2c7a071f81677c6957d05670ce594d87a\": container with ID starting with 785114b80d4693e7fdb3550e1987bdb2c7a071f81677c6957d05670ce594d87a not found: ID does not exist" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.848600 5049 scope.go:117] "RemoveContainer" containerID="a641d10f9c38faec6f50e85861bad92352028a0d70560dadea9aa08016f2a6e9" Jan 27 18:12:32 crc kubenswrapper[5049]: E0127 18:12:32.849011 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a641d10f9c38faec6f50e85861bad92352028a0d70560dadea9aa08016f2a6e9\": container with ID starting with a641d10f9c38faec6f50e85861bad92352028a0d70560dadea9aa08016f2a6e9 not found: ID does not exist" containerID="a641d10f9c38faec6f50e85861bad92352028a0d70560dadea9aa08016f2a6e9" Jan 27 18:12:32 crc kubenswrapper[5049]: I0127 18:12:32.849058 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a641d10f9c38faec6f50e85861bad92352028a0d70560dadea9aa08016f2a6e9"} err="failed to get container status \"a641d10f9c38faec6f50e85861bad92352028a0d70560dadea9aa08016f2a6e9\": rpc error: code = NotFound desc = could not find container \"a641d10f9c38faec6f50e85861bad92352028a0d70560dadea9aa08016f2a6e9\": container with ID starting with a641d10f9c38faec6f50e85861bad92352028a0d70560dadea9aa08016f2a6e9 not found: ID does not exist" Jan 27 18:12:33 crc kubenswrapper[5049]: I0127 18:12:33.661939 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3ce452-b759-4798-90f7-53ded84821cf" path="/var/lib/kubelet/pods/4e3ce452-b759-4798-90f7-53ded84821cf/volumes" Jan 27 18:12:47 crc kubenswrapper[5049]: I0127 18:12:47.781728 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:12:47 crc kubenswrapper[5049]: I0127 18:12:47.782299 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:13:17 crc kubenswrapper[5049]: I0127 18:13:17.781348 5049 patch_prober.go:28] interesting 
Jan 27 18:13:17 crc kubenswrapper[5049]: I0127 18:13:17.782107 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:13:17 crc kubenswrapper[5049]: I0127 18:13:17.782192 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9"
Jan 27 18:13:17 crc kubenswrapper[5049]: I0127 18:13:17.783300 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 18:13:17 crc kubenswrapper[5049]: I0127 18:13:17.783401 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" gracePeriod=600
Jan 27 18:13:17 crc kubenswrapper[5049]: E0127 18:13:17.913575 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:13:18 crc kubenswrapper[5049]: I0127 18:13:18.161961 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" exitCode=0
Jan 27 18:13:18 crc kubenswrapper[5049]: I0127 18:13:18.162007 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1"}
Jan 27 18:13:18 crc kubenswrapper[5049]: I0127 18:13:18.162529 5049 scope.go:117] "RemoveContainer" containerID="054cddf21664b58e6aee3f167f2db961be7254405696ff060cacf6639c60ca27"
Jan 27 18:13:18 crc kubenswrapper[5049]: I0127 18:13:18.163558 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1"
Jan 27 18:13:18 crc kubenswrapper[5049]: E0127 18:13:18.164072 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:13:32 crc kubenswrapper[5049]: I0127 18:13:32.646539 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1"
Jan 27 18:13:32 crc kubenswrapper[5049]: E0127 18:13:32.647582 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:13:44 crc kubenswrapper[5049]: I0127 18:13:44.646483 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1"
Jan 27 18:13:44 crc kubenswrapper[5049]: E0127 18:13:44.647343 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:13:58 crc kubenswrapper[5049]: I0127 18:13:58.646311 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1"
Jan 27 18:13:58 crc kubenswrapper[5049]: E0127 18:13:58.647383 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:14:12 crc kubenswrapper[5049]: I0127 18:14:12.646511 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1"
Jan 27 18:14:12 crc kubenswrapper[5049]: E0127 18:14:12.647547 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:14:17 crc kubenswrapper[5049]: I0127 18:14:17.980192 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-hrj6g"]
Jan 27 18:14:17 crc kubenswrapper[5049]: I0127 18:14:17.988909 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-hrj6g"]
Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.101691 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-5tkj6"]
Jan 27 18:14:18 crc kubenswrapper[5049]: E0127 18:14:18.101996 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3ce452-b759-4798-90f7-53ded84821cf" containerName="registry-server"
Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.102011 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3ce452-b759-4798-90f7-53ded84821cf" containerName="registry-server"
Jan 27 18:14:18 crc kubenswrapper[5049]: E0127 18:14:18.102027 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e9d2e5-3b00-4a64-93ca-33be4ab36115" containerName="registry-server"
Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.102036 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e9d2e5-3b00-4a64-93ca-33be4ab36115" containerName="registry-server"
Jan 27 18:14:18 crc kubenswrapper[5049]: E0127 18:14:18.102050 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3ce452-b759-4798-90f7-53ded84821cf" containerName="extract-utilities"
Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.102058 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3ce452-b759-4798-90f7-53ded84821cf" containerName="extract-utilities"
Jan 27 18:14:18 crc kubenswrapper[5049]: E0127 18:14:18.102079 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e9d2e5-3b00-4a64-93ca-33be4ab36115" containerName="extract-content"
Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.102086 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e9d2e5-3b00-4a64-93ca-33be4ab36115" containerName="extract-content"
Jan 27 18:14:18 crc kubenswrapper[5049]: E0127 18:14:18.102101 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e9d2e5-3b00-4a64-93ca-33be4ab36115" containerName="extract-utilities"
Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.102109 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e9d2e5-3b00-4a64-93ca-33be4ab36115" containerName="extract-utilities"
Jan 27 18:14:18 crc kubenswrapper[5049]: E0127 18:14:18.102127 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3ce452-b759-4798-90f7-53ded84821cf" containerName="extract-content"
Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.102135 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3ce452-b759-4798-90f7-53ded84821cf" containerName="extract-content"
Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.102292 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3ce452-b759-4798-90f7-53ded84821cf" containerName="registry-server"
Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.102312 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e9d2e5-3b00-4a64-93ca-33be4ab36115" containerName="registry-server"
Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.102826 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-5tkj6"
Need to start a new one" pod="crc-storage/crc-storage-crc-5tkj6" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.108072 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.108265 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.108334 5049 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-665z9" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.108652 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.130777 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-5tkj6"] Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.192937 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njwqp\" (UniqueName: \"kubernetes.io/projected/a5523c7e-8812-4023-a388-bfb5edd2a481-kube-api-access-njwqp\") pod \"crc-storage-crc-5tkj6\" (UID: \"a5523c7e-8812-4023-a388-bfb5edd2a481\") " pod="crc-storage/crc-storage-crc-5tkj6" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.192979 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a5523c7e-8812-4023-a388-bfb5edd2a481-node-mnt\") pod \"crc-storage-crc-5tkj6\" (UID: \"a5523c7e-8812-4023-a388-bfb5edd2a481\") " pod="crc-storage/crc-storage-crc-5tkj6" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.193004 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a5523c7e-8812-4023-a388-bfb5edd2a481-crc-storage\") pod \"crc-storage-crc-5tkj6\" (UID: \"a5523c7e-8812-4023-a388-bfb5edd2a481\") " pod="crc-storage/crc-storage-crc-5tkj6" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.294741 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njwqp\" (UniqueName: \"kubernetes.io/projected/a5523c7e-8812-4023-a388-bfb5edd2a481-kube-api-access-njwqp\") pod \"crc-storage-crc-5tkj6\" (UID: \"a5523c7e-8812-4023-a388-bfb5edd2a481\") " pod="crc-storage/crc-storage-crc-5tkj6" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.294807 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a5523c7e-8812-4023-a388-bfb5edd2a481-node-mnt\") pod \"crc-storage-crc-5tkj6\" (UID: \"a5523c7e-8812-4023-a388-bfb5edd2a481\") " pod="crc-storage/crc-storage-crc-5tkj6" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.294842 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a5523c7e-8812-4023-a388-bfb5edd2a481-crc-storage\") pod \"crc-storage-crc-5tkj6\" (UID: \"a5523c7e-8812-4023-a388-bfb5edd2a481\") " pod="crc-storage/crc-storage-crc-5tkj6" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.295811 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a5523c7e-8812-4023-a388-bfb5edd2a481-node-mnt\") pod \"crc-storage-crc-5tkj6\" (UID: \"a5523c7e-8812-4023-a388-bfb5edd2a481\") " 
pod="crc-storage/crc-storage-crc-5tkj6" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.296283 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a5523c7e-8812-4023-a388-bfb5edd2a481-crc-storage\") pod \"crc-storage-crc-5tkj6\" (UID: \"a5523c7e-8812-4023-a388-bfb5edd2a481\") " pod="crc-storage/crc-storage-crc-5tkj6" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.330598 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njwqp\" (UniqueName: \"kubernetes.io/projected/a5523c7e-8812-4023-a388-bfb5edd2a481-kube-api-access-njwqp\") pod \"crc-storage-crc-5tkj6\" (UID: \"a5523c7e-8812-4023-a388-bfb5edd2a481\") " pod="crc-storage/crc-storage-crc-5tkj6" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.440838 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-5tkj6" Jan 27 18:14:18 crc kubenswrapper[5049]: I0127 18:14:18.864256 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-5tkj6"] Jan 27 18:14:19 crc kubenswrapper[5049]: I0127 18:14:19.657632 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="029840ba-bdf3-4afb-a8d7-93c86d641dd9" path="/var/lib/kubelet/pods/029840ba-bdf3-4afb-a8d7-93c86d641dd9/volumes" Jan 27 18:14:19 crc kubenswrapper[5049]: I0127 18:14:19.713178 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5tkj6" event={"ID":"a5523c7e-8812-4023-a388-bfb5edd2a481","Type":"ContainerStarted","Data":"8a35eebfb2a233bda976b5ad052dfb82668f1e022dbeb49b634ef5f93561ee61"} Jan 27 18:14:19 crc kubenswrapper[5049]: I0127 18:14:19.713366 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5tkj6" event={"ID":"a5523c7e-8812-4023-a388-bfb5edd2a481","Type":"ContainerStarted","Data":"c1cba7f9fc6525e7afce7884563974df090332f4805d9893a44e3fc73c2b50f2"} Jan 27 18:14:19 crc kubenswrapper[5049]: I0127 18:14:19.738293 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="crc-storage/crc-storage-crc-5tkj6" podStartSLOduration=1.216553382 podStartE2EDuration="1.738277068s" podCreationTimestamp="2026-01-27 18:14:18 +0000 UTC" firstStartedPulling="2026-01-27 18:14:18.872837709 +0000 UTC m=+4633.971811268" lastFinishedPulling="2026-01-27 18:14:19.394561395 +0000 UTC m=+4634.493534954" observedRunningTime="2026-01-27 18:14:19.731869219 +0000 UTC m=+4634.830842768" watchObservedRunningTime="2026-01-27 18:14:19.738277068 +0000 UTC m=+4634.837250617" Jan 27 18:14:20 crc kubenswrapper[5049]: I0127 18:14:20.723666 5049 generic.go:334] "Generic (PLEG): container finished" podID="a5523c7e-8812-4023-a388-bfb5edd2a481" containerID="8a35eebfb2a233bda976b5ad052dfb82668f1e022dbeb49b634ef5f93561ee61" exitCode=0 Jan 27 18:14:20 crc kubenswrapper[5049]: I0127 18:14:20.723748 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5tkj6" event={"ID":"a5523c7e-8812-4023-a388-bfb5edd2a481","Type":"ContainerDied","Data":"8a35eebfb2a233bda976b5ad052dfb82668f1e022dbeb49b634ef5f93561ee61"} Jan 27 18:14:22 crc kubenswrapper[5049]: I0127 18:14:22.053581 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-5tkj6" Jan 27 18:14:22 crc kubenswrapper[5049]: I0127 18:14:22.247626 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a5523c7e-8812-4023-a388-bfb5edd2a481-node-mnt\") pod \"a5523c7e-8812-4023-a388-bfb5edd2a481\" (UID: \"a5523c7e-8812-4023-a388-bfb5edd2a481\") " Jan 27 18:14:22 crc kubenswrapper[5049]: I0127 18:14:22.247887 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njwqp\" (UniqueName: \"kubernetes.io/projected/a5523c7e-8812-4023-a388-bfb5edd2a481-kube-api-access-njwqp\") pod \"a5523c7e-8812-4023-a388-bfb5edd2a481\" (UID: \"a5523c7e-8812-4023-a388-bfb5edd2a481\") " Jan 27 18:14:22 crc kubenswrapper[5049]: I0127 18:14:22.247988 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a5523c7e-8812-4023-a388-bfb5edd2a481-crc-storage\") pod \"a5523c7e-8812-4023-a388-bfb5edd2a481\" (UID: \"a5523c7e-8812-4023-a388-bfb5edd2a481\") " Jan 27 18:14:22 crc kubenswrapper[5049]: I0127 18:14:22.247962 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5523c7e-8812-4023-a388-bfb5edd2a481-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "a5523c7e-8812-4023-a388-bfb5edd2a481" (UID: "a5523c7e-8812-4023-a388-bfb5edd2a481"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:14:22 crc kubenswrapper[5049]: I0127 18:14:22.248344 5049 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/a5523c7e-8812-4023-a388-bfb5edd2a481-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 27 18:14:22 crc kubenswrapper[5049]: I0127 18:14:22.260359 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5523c7e-8812-4023-a388-bfb5edd2a481-kube-api-access-njwqp" (OuterVolumeSpecName: "kube-api-access-njwqp") pod "a5523c7e-8812-4023-a388-bfb5edd2a481" (UID: "a5523c7e-8812-4023-a388-bfb5edd2a481"). InnerVolumeSpecName "kube-api-access-njwqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:14:22 crc kubenswrapper[5049]: I0127 18:14:22.280643 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5523c7e-8812-4023-a388-bfb5edd2a481-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "a5523c7e-8812-4023-a388-bfb5edd2a481" (UID: "a5523c7e-8812-4023-a388-bfb5edd2a481"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:14:22 crc kubenswrapper[5049]: I0127 18:14:22.349439 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njwqp\" (UniqueName: \"kubernetes.io/projected/a5523c7e-8812-4023-a388-bfb5edd2a481-kube-api-access-njwqp\") on node \"crc\" DevicePath \"\"" Jan 27 18:14:22 crc kubenswrapper[5049]: I0127 18:14:22.349468 5049 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/a5523c7e-8812-4023-a388-bfb5edd2a481-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 27 18:14:22 crc kubenswrapper[5049]: I0127 18:14:22.745566 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5tkj6" event={"ID":"a5523c7e-8812-4023-a388-bfb5edd2a481","Type":"ContainerDied","Data":"c1cba7f9fc6525e7afce7884563974df090332f4805d9893a44e3fc73c2b50f2"} Jan 27 18:14:22 crc kubenswrapper[5049]: I0127 18:14:22.745623 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1cba7f9fc6525e7afce7884563974df090332f4805d9893a44e3fc73c2b50f2" Jan 27 18:14:22 crc kubenswrapper[5049]: I0127 18:14:22.746195 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-5tkj6" Jan 27 18:14:23 crc kubenswrapper[5049]: I0127 18:14:23.962546 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-5tkj6"] Jan 27 18:14:23 crc kubenswrapper[5049]: I0127 18:14:23.976168 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-5tkj6"] Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.067054 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-gs2gj"] Jan 27 18:14:24 crc kubenswrapper[5049]: E0127 18:14:24.067642 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5523c7e-8812-4023-a388-bfb5edd2a481" containerName="storage" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.067703 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5523c7e-8812-4023-a388-bfb5edd2a481" containerName="storage" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.067956 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5523c7e-8812-4023-a388-bfb5edd2a481" containerName="storage" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.068719 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-gs2gj" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.070421 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.071145 5049 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-665z9" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.072720 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.072764 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.132371 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-gs2gj"] Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.141282 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzfqd\" (UniqueName: \"kubernetes.io/projected/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-kube-api-access-tzfqd\") pod \"crc-storage-crc-gs2gj\" (UID: \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\") " pod="crc-storage/crc-storage-crc-gs2gj" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.141611 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-node-mnt\") pod \"crc-storage-crc-gs2gj\" (UID: \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\") " pod="crc-storage/crc-storage-crc-gs2gj" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.141734 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-crc-storage\") pod \"crc-storage-crc-gs2gj\" (UID: \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\") " pod="crc-storage/crc-storage-crc-gs2gj" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.242707 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzfqd\" (UniqueName: \"kubernetes.io/projected/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-kube-api-access-tzfqd\") pod \"crc-storage-crc-gs2gj\" (UID: \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\") " pod="crc-storage/crc-storage-crc-gs2gj" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.242790 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-node-mnt\") pod \"crc-storage-crc-gs2gj\" (UID: \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\") " pod="crc-storage/crc-storage-crc-gs2gj" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.242815 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-crc-storage\") pod \"crc-storage-crc-gs2gj\" (UID: \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\") " pod="crc-storage/crc-storage-crc-gs2gj" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.243346 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-node-mnt\") pod \"crc-storage-crc-gs2gj\" (UID: \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\") " 
pod="crc-storage/crc-storage-crc-gs2gj" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.245065 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-crc-storage\") pod \"crc-storage-crc-gs2gj\" (UID: \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\") " pod="crc-storage/crc-storage-crc-gs2gj" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.262951 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzfqd\" (UniqueName: \"kubernetes.io/projected/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-kube-api-access-tzfqd\") pod \"crc-storage-crc-gs2gj\" (UID: \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\") " pod="crc-storage/crc-storage-crc-gs2gj" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.441012 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-gs2gj" Jan 27 18:14:24 crc kubenswrapper[5049]: I0127 18:14:24.664063 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-gs2gj"] Jan 27 18:14:25 crc kubenswrapper[5049]: I0127 18:14:25.663891 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5523c7e-8812-4023-a388-bfb5edd2a481" path="/var/lib/kubelet/pods/a5523c7e-8812-4023-a388-bfb5edd2a481/volumes" Jan 27 18:14:25 crc kubenswrapper[5049]: I0127 18:14:25.772626 5049 generic.go:334] "Generic (PLEG): container finished" podID="9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c" containerID="83202efe2d08a82d727a3aa97f676305308c1f6bb86d698b3581e0d62a0cce4e" exitCode=0 Jan 27 18:14:25 crc kubenswrapper[5049]: I0127 18:14:25.772665 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gs2gj" event={"ID":"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c","Type":"ContainerDied","Data":"83202efe2d08a82d727a3aa97f676305308c1f6bb86d698b3581e0d62a0cce4e"} Jan 27 18:14:25 crc kubenswrapper[5049]: I0127 18:14:25.772721 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gs2gj" event={"ID":"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c","Type":"ContainerStarted","Data":"dd02929c3bbed24fa9fa85dc319dfff8b336065c02f406ee60994f6d93f71b66"} Jan 27 18:14:27 crc kubenswrapper[5049]: I0127 18:14:27.139790 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-gs2gj" Jan 27 18:14:27 crc kubenswrapper[5049]: I0127 18:14:27.284320 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-node-mnt\") pod \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\" (UID: \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\") " Jan 27 18:14:27 crc kubenswrapper[5049]: I0127 18:14:27.284456 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzfqd\" (UniqueName: \"kubernetes.io/projected/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-kube-api-access-tzfqd\") pod \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\" (UID: \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\") " Jan 27 18:14:27 crc kubenswrapper[5049]: I0127 18:14:27.284508 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c" (UID: "9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:14:27 crc kubenswrapper[5049]: I0127 18:14:27.284543 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-crc-storage\") pod \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\" (UID: \"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c\") " Jan 27 18:14:27 crc kubenswrapper[5049]: I0127 18:14:27.284840 5049 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 27 18:14:27 crc kubenswrapper[5049]: I0127 18:14:27.290663 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-kube-api-access-tzfqd" (OuterVolumeSpecName: "kube-api-access-tzfqd") pod "9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c" (UID: "9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c"). InnerVolumeSpecName "kube-api-access-tzfqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:14:27 crc kubenswrapper[5049]: I0127 18:14:27.319709 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c" (UID: "9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:14:27 crc kubenswrapper[5049]: I0127 18:14:27.386470 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzfqd\" (UniqueName: \"kubernetes.io/projected/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-kube-api-access-tzfqd\") on node \"crc\" DevicePath \"\"" Jan 27 18:14:27 crc kubenswrapper[5049]: I0127 18:14:27.386519 5049 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 27 18:14:27 crc kubenswrapper[5049]: I0127 18:14:27.646750 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:14:27 crc kubenswrapper[5049]: E0127 18:14:27.647608 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:14:27 crc kubenswrapper[5049]: I0127 18:14:27.794764 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-gs2gj" event={"ID":"9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c","Type":"ContainerDied","Data":"dd02929c3bbed24fa9fa85dc319dfff8b336065c02f406ee60994f6d93f71b66"} Jan 27 18:14:27 crc kubenswrapper[5049]: I0127 18:14:27.794807 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd02929c3bbed24fa9fa85dc319dfff8b336065c02f406ee60994f6d93f71b66" Jan 27 18:14:27 crc kubenswrapper[5049]: I0127 18:14:27.794887 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-gs2gj" Jan 27 18:14:42 crc kubenswrapper[5049]: I0127 18:14:42.646309 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:14:42 crc kubenswrapper[5049]: E0127 18:14:42.647480 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:14:55 crc kubenswrapper[5049]: I0127 18:14:55.654022 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:14:55 crc kubenswrapper[5049]: E0127 18:14:55.655278 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.221173 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs"] Jan 27 18:15:00 crc kubenswrapper[5049]: E0127 18:15:00.223310 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c" containerName="storage" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.223439 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c" containerName="storage" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.223802 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f78b6c2-a30a-4aa1-bb21-e5dd04146f3c" containerName="storage" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.224721 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.236037 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs"] Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.236683 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.236864 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.254014 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/640d350d-23ee-47b5-bdd1-404f1d26226e-config-volume\") pod \"collect-profiles-29492295-swqqs\" (UID: \"640d350d-23ee-47b5-bdd1-404f1d26226e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.254218 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/640d350d-23ee-47b5-bdd1-404f1d26226e-secret-volume\") pod \"collect-profiles-29492295-swqqs\" (UID: \"640d350d-23ee-47b5-bdd1-404f1d26226e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.254338 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwz8d\" (UniqueName: \"kubernetes.io/projected/640d350d-23ee-47b5-bdd1-404f1d26226e-kube-api-access-qwz8d\") pod \"collect-profiles-29492295-swqqs\" (UID: \"640d350d-23ee-47b5-bdd1-404f1d26226e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.355204 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/640d350d-23ee-47b5-bdd1-404f1d26226e-secret-volume\") pod \"collect-profiles-29492295-swqqs\" (UID: \"640d350d-23ee-47b5-bdd1-404f1d26226e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.355506 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwz8d\" (UniqueName: \"kubernetes.io/projected/640d350d-23ee-47b5-bdd1-404f1d26226e-kube-api-access-qwz8d\") pod \"collect-profiles-29492295-swqqs\" (UID: \"640d350d-23ee-47b5-bdd1-404f1d26226e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.355629 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/640d350d-23ee-47b5-bdd1-404f1d26226e-config-volume\") pod \"collect-profiles-29492295-swqqs\" (UID: \"640d350d-23ee-47b5-bdd1-404f1d26226e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.356542 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/640d350d-23ee-47b5-bdd1-404f1d26226e-config-volume\") pod 
\"collect-profiles-29492295-swqqs\" (UID: \"640d350d-23ee-47b5-bdd1-404f1d26226e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.361444 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/640d350d-23ee-47b5-bdd1-404f1d26226e-secret-volume\") pod \"collect-profiles-29492295-swqqs\" (UID: \"640d350d-23ee-47b5-bdd1-404f1d26226e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.373354 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwz8d\" (UniqueName: \"kubernetes.io/projected/640d350d-23ee-47b5-bdd1-404f1d26226e-kube-api-access-qwz8d\") pod \"collect-profiles-29492295-swqqs\" (UID: \"640d350d-23ee-47b5-bdd1-404f1d26226e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.550870 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" Jan 27 18:15:00 crc kubenswrapper[5049]: I0127 18:15:00.998308 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs"] Jan 27 18:15:01 crc kubenswrapper[5049]: I0127 18:15:01.077582 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" event={"ID":"640d350d-23ee-47b5-bdd1-404f1d26226e","Type":"ContainerStarted","Data":"f3a0844dc4ab0093699b0bc299ae53a0eb5fe25385738fe0ef261fd3eab8aac0"} Jan 27 18:15:02 crc kubenswrapper[5049]: I0127 18:15:02.087247 5049 generic.go:334] "Generic (PLEG): container finished" podID="640d350d-23ee-47b5-bdd1-404f1d26226e" containerID="acb8dc8e60eaf75f70e413e3cf1bb75d70fa7c496d0859e16c2969c8d456fb0f" exitCode=0 Jan 27 18:15:02 crc kubenswrapper[5049]: I0127 18:15:02.087611 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" event={"ID":"640d350d-23ee-47b5-bdd1-404f1d26226e","Type":"ContainerDied","Data":"acb8dc8e60eaf75f70e413e3cf1bb75d70fa7c496d0859e16c2969c8d456fb0f"} Jan 27 18:15:03 crc kubenswrapper[5049]: I0127 18:15:03.418820 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" Jan 27 18:15:03 crc kubenswrapper[5049]: I0127 18:15:03.504764 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/640d350d-23ee-47b5-bdd1-404f1d26226e-secret-volume\") pod \"640d350d-23ee-47b5-bdd1-404f1d26226e\" (UID: \"640d350d-23ee-47b5-bdd1-404f1d26226e\") " Jan 27 18:15:03 crc kubenswrapper[5049]: I0127 18:15:03.504853 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/640d350d-23ee-47b5-bdd1-404f1d26226e-config-volume\") pod \"640d350d-23ee-47b5-bdd1-404f1d26226e\" (UID: \"640d350d-23ee-47b5-bdd1-404f1d26226e\") " Jan 27 18:15:03 crc kubenswrapper[5049]: I0127 18:15:03.504896 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwz8d\" (UniqueName: \"kubernetes.io/projected/640d350d-23ee-47b5-bdd1-404f1d26226e-kube-api-access-qwz8d\") pod \"640d350d-23ee-47b5-bdd1-404f1d26226e\" (UID: \"640d350d-23ee-47b5-bdd1-404f1d26226e\") " Jan 27 18:15:03 crc kubenswrapper[5049]: I0127 18:15:03.505790 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/640d350d-23ee-47b5-bdd1-404f1d26226e-config-volume" (OuterVolumeSpecName: "config-volume") pod "640d350d-23ee-47b5-bdd1-404f1d26226e" (UID: "640d350d-23ee-47b5-bdd1-404f1d26226e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:15:03 crc kubenswrapper[5049]: I0127 18:15:03.511522 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/640d350d-23ee-47b5-bdd1-404f1d26226e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "640d350d-23ee-47b5-bdd1-404f1d26226e" (UID: "640d350d-23ee-47b5-bdd1-404f1d26226e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:15:03 crc kubenswrapper[5049]: I0127 18:15:03.511608 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/640d350d-23ee-47b5-bdd1-404f1d26226e-kube-api-access-qwz8d" (OuterVolumeSpecName: "kube-api-access-qwz8d") pod "640d350d-23ee-47b5-bdd1-404f1d26226e" (UID: "640d350d-23ee-47b5-bdd1-404f1d26226e"). InnerVolumeSpecName "kube-api-access-qwz8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:15:03 crc kubenswrapper[5049]: I0127 18:15:03.606368 5049 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/640d350d-23ee-47b5-bdd1-404f1d26226e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:15:03 crc kubenswrapper[5049]: I0127 18:15:03.606408 5049 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/640d350d-23ee-47b5-bdd1-404f1d26226e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:15:03 crc kubenswrapper[5049]: I0127 18:15:03.606422 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwz8d\" (UniqueName: \"kubernetes.io/projected/640d350d-23ee-47b5-bdd1-404f1d26226e-kube-api-access-qwz8d\") on node \"crc\" DevicePath \"\"" Jan 27 18:15:04 crc kubenswrapper[5049]: I0127 18:15:04.117176 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" event={"ID":"640d350d-23ee-47b5-bdd1-404f1d26226e","Type":"ContainerDied","Data":"f3a0844dc4ab0093699b0bc299ae53a0eb5fe25385738fe0ef261fd3eab8aac0"} Jan 27 18:15:04 crc kubenswrapper[5049]: I0127 18:15:04.117219 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3a0844dc4ab0093699b0bc299ae53a0eb5fe25385738fe0ef261fd3eab8aac0" Jan 27 18:15:04 crc kubenswrapper[5049]: I0127 18:15:04.117233 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs" Jan 27 18:15:04 crc kubenswrapper[5049]: I0127 18:15:04.507305 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj"] Jan 27 18:15:04 crc kubenswrapper[5049]: I0127 18:15:04.512750 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492250-dccrj"] Jan 27 18:15:05 crc kubenswrapper[5049]: I0127 18:15:05.663817 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da79d64c-a115-4a32-a92d-a6f99ad18b93" path="/var/lib/kubelet/pods/da79d64c-a115-4a32-a92d-a6f99ad18b93/volumes" Jan 27 18:15:06 crc kubenswrapper[5049]: I0127 18:15:06.646589 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:15:06 crc kubenswrapper[5049]: E0127 18:15:06.647423 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:15:15 crc kubenswrapper[5049]: I0127 18:15:15.960119 5049 scope.go:117] "RemoveContainer" containerID="5aecb04c1f480a0c188f27c089e3373028b6cf81e6a54f6aec18bcc98fd1b76d" Jan 27 18:15:15 crc kubenswrapper[5049]: I0127 18:15:15.983297 5049 scope.go:117] "RemoveContainer" containerID="882c9fa655df72c653de293af240dc8b0c752c924ee65fdc7ba30e208b3972f0" Jan 27 18:15:20 crc kubenswrapper[5049]: I0127 18:15:20.646792 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:15:20 crc kubenswrapper[5049]: E0127 
18:15:20.647585 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:15:34 crc kubenswrapper[5049]: I0127 18:15:34.646452 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:15:34 crc kubenswrapper[5049]: E0127 18:15:34.647456 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:15:48 crc kubenswrapper[5049]: I0127 18:15:48.646883 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:15:48 crc kubenswrapper[5049]: E0127 18:15:48.647933 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:15:59 crc kubenswrapper[5049]: I0127 18:15:59.646751 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:15:59 crc kubenswrapper[5049]: E0127 18:15:59.647839 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:16:11 crc kubenswrapper[5049]: I0127 18:16:11.646416 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:16:11 crc kubenswrapper[5049]: E0127 18:16:11.647144 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:16:23 crc kubenswrapper[5049]: I0127 18:16:23.646938 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:16:23 crc kubenswrapper[5049]: E0127 18:16:23.647922 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:16:35 crc kubenswrapper[5049]: I0127 18:16:35.649809 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:16:35 crc kubenswrapper[5049]: E0127 18:16:35.650618 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:16:47 crc kubenswrapper[5049]: I0127 18:16:47.647134 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:16:47 crc kubenswrapper[5049]: E0127 18:16:47.648418 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.337447 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q6lf8"] Jan 27 18:16:54 crc kubenswrapper[5049]: E0127 18:16:54.337996 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="640d350d-23ee-47b5-bdd1-404f1d26226e" containerName="collect-profiles" Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.338008 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="640d350d-23ee-47b5-bdd1-404f1d26226e" containerName="collect-profiles" Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.338152 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="640d350d-23ee-47b5-bdd1-404f1d26226e" containerName="collect-profiles" Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.339054 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.355051 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6lf8"] Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.435182 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c70d1c56-ab69-4bb5-b7ce-c653b340d434-catalog-content\") pod \"redhat-marketplace-q6lf8\" (UID: \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\") " pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.435241 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ffqj\" (UniqueName: \"kubernetes.io/projected/c70d1c56-ab69-4bb5-b7ce-c653b340d434-kube-api-access-9ffqj\") pod \"redhat-marketplace-q6lf8\" (UID: \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\") " pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.435295 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c70d1c56-ab69-4bb5-b7ce-c653b340d434-utilities\") pod \"redhat-marketplace-q6lf8\" (UID: \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\") " pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.536309 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c70d1c56-ab69-4bb5-b7ce-c653b340d434-utilities\") pod \"redhat-marketplace-q6lf8\" (UID: \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\") " pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.536758 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c70d1c56-ab69-4bb5-b7ce-c653b340d434-catalog-content\") pod \"redhat-marketplace-q6lf8\" (UID: \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\") " pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.536905 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ffqj\" (UniqueName: \"kubernetes.io/projected/c70d1c56-ab69-4bb5-b7ce-c653b340d434-kube-api-access-9ffqj\") pod \"redhat-marketplace-q6lf8\" (UID: \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\") " pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.537005 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c70d1c56-ab69-4bb5-b7ce-c653b340d434-utilities\") pod \"redhat-marketplace-q6lf8\" (UID: \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\") " pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.537153 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c70d1c56-ab69-4bb5-b7ce-c653b340d434-catalog-content\") pod \"redhat-marketplace-q6lf8\" (UID: \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\") " pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.556996 5049 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9ffqj\" (UniqueName: \"kubernetes.io/projected/c70d1c56-ab69-4bb5-b7ce-c653b340d434-kube-api-access-9ffqj\") pod \"redhat-marketplace-q6lf8\" (UID: \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\") " pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:16:54 crc kubenswrapper[5049]: I0127 18:16:54.657373 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:16:55 crc kubenswrapper[5049]: I0127 18:16:55.116088 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6lf8"] Jan 27 18:16:56 crc kubenswrapper[5049]: I0127 18:16:56.044103 5049 generic.go:334] "Generic (PLEG): container finished" podID="c70d1c56-ab69-4bb5-b7ce-c653b340d434" containerID="adb97b77104f46411a72a62a9843f5485195251dfd8b9e61f01c0f4173ef58af" exitCode=0 Jan 27 18:16:56 crc kubenswrapper[5049]: I0127 18:16:56.044160 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6lf8" event={"ID":"c70d1c56-ab69-4bb5-b7ce-c653b340d434","Type":"ContainerDied","Data":"adb97b77104f46411a72a62a9843f5485195251dfd8b9e61f01c0f4173ef58af"} Jan 27 18:16:56 crc kubenswrapper[5049]: I0127 18:16:56.044551 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6lf8" event={"ID":"c70d1c56-ab69-4bb5-b7ce-c653b340d434","Type":"ContainerStarted","Data":"b13579619b4ae780d6a626b4ddb99ec759c23fd6a3926669a505d4480f275f20"} Jan 27 18:16:57 crc kubenswrapper[5049]: I0127 18:16:57.054946 5049 generic.go:334] "Generic (PLEG): container finished" podID="c70d1c56-ab69-4bb5-b7ce-c653b340d434" containerID="4d00092f2333fb7afdbd993f0290525f90238b30372cf073ff5e13a5701a1459" exitCode=0 Jan 27 18:16:57 crc kubenswrapper[5049]: I0127 18:16:57.055000 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6lf8" event={"ID":"c70d1c56-ab69-4bb5-b7ce-c653b340d434","Type":"ContainerDied","Data":"4d00092f2333fb7afdbd993f0290525f90238b30372cf073ff5e13a5701a1459"} Jan 27 18:16:58 crc kubenswrapper[5049]: I0127 18:16:58.072201 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6lf8" event={"ID":"c70d1c56-ab69-4bb5-b7ce-c653b340d434","Type":"ContainerStarted","Data":"b17f1c5295669b2718220cf3f3da6d77b52ef24494f5cad52b093bbbf3f482ab"} Jan 27 18:16:58 crc kubenswrapper[5049]: I0127 18:16:58.097233 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q6lf8" podStartSLOduration=2.629495978 podStartE2EDuration="4.097216336s" podCreationTimestamp="2026-01-27 18:16:54 +0000 UTC" firstStartedPulling="2026-01-27 18:16:56.046993503 +0000 UTC m=+4791.145967062" lastFinishedPulling="2026-01-27 18:16:57.514713831 +0000 UTC m=+4792.613687420" observedRunningTime="2026-01-27 18:16:58.095008124 +0000 UTC m=+4793.193981693" watchObservedRunningTime="2026-01-27 18:16:58.097216336 +0000 UTC m=+4793.196189885" Jan 27 18:16:58 crc kubenswrapper[5049]: I0127 18:16:58.646629 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:16:58 crc kubenswrapper[5049]: E0127 18:16:58.647146 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:17:04 crc kubenswrapper[5049]: I0127 18:17:04.658390 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:17:04 crc kubenswrapper[5049]: I0127 18:17:04.658790 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:17:04 crc kubenswrapper[5049]: I0127 18:17:04.719926 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:17:05 crc kubenswrapper[5049]: I0127 18:17:05.212590 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:17:05 crc kubenswrapper[5049]: I0127 18:17:05.293846 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6lf8"] Jan 27 18:17:07 crc kubenswrapper[5049]: I0127 18:17:07.153332 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q6lf8" podUID="c70d1c56-ab69-4bb5-b7ce-c653b340d434" containerName="registry-server" containerID="cri-o://b17f1c5295669b2718220cf3f3da6d77b52ef24494f5cad52b093bbbf3f482ab" gracePeriod=2 Jan 27 18:17:07 crc kubenswrapper[5049]: I0127 18:17:07.578783 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:17:07 crc kubenswrapper[5049]: I0127 18:17:07.661203 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c70d1c56-ab69-4bb5-b7ce-c653b340d434-utilities\") pod \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\" (UID: \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\") " Jan 27 18:17:07 crc kubenswrapper[5049]: I0127 18:17:07.661241 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c70d1c56-ab69-4bb5-b7ce-c653b340d434-catalog-content\") pod \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\" (UID: \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\") " Jan 27 18:17:07 crc kubenswrapper[5049]: I0127 18:17:07.661312 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ffqj\" (UniqueName: \"kubernetes.io/projected/c70d1c56-ab69-4bb5-b7ce-c653b340d434-kube-api-access-9ffqj\") pod \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\" (UID: \"c70d1c56-ab69-4bb5-b7ce-c653b340d434\") " Jan 27 18:17:07 crc kubenswrapper[5049]: I0127 18:17:07.662168 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c70d1c56-ab69-4bb5-b7ce-c653b340d434-utilities" (OuterVolumeSpecName: "utilities") pod "c70d1c56-ab69-4bb5-b7ce-c653b340d434" (UID: "c70d1c56-ab69-4bb5-b7ce-c653b340d434"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:17:07 crc kubenswrapper[5049]: I0127 18:17:07.667695 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c70d1c56-ab69-4bb5-b7ce-c653b340d434-kube-api-access-9ffqj" (OuterVolumeSpecName: "kube-api-access-9ffqj") pod "c70d1c56-ab69-4bb5-b7ce-c653b340d434" (UID: "c70d1c56-ab69-4bb5-b7ce-c653b340d434"). InnerVolumeSpecName "kube-api-access-9ffqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:17:07 crc kubenswrapper[5049]: I0127 18:17:07.681821 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c70d1c56-ab69-4bb5-b7ce-c653b340d434-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c70d1c56-ab69-4bb5-b7ce-c653b340d434" (UID: "c70d1c56-ab69-4bb5-b7ce-c653b340d434"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:17:07 crc kubenswrapper[5049]: I0127 18:17:07.763406 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ffqj\" (UniqueName: \"kubernetes.io/projected/c70d1c56-ab69-4bb5-b7ce-c653b340d434-kube-api-access-9ffqj\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:07 crc kubenswrapper[5049]: I0127 18:17:07.763471 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c70d1c56-ab69-4bb5-b7ce-c653b340d434-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:07 crc kubenswrapper[5049]: I0127 18:17:07.763496 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c70d1c56-ab69-4bb5-b7ce-c653b340d434-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.162351 5049 generic.go:334] "Generic (PLEG): container finished" podID="c70d1c56-ab69-4bb5-b7ce-c653b340d434" containerID="b17f1c5295669b2718220cf3f3da6d77b52ef24494f5cad52b093bbbf3f482ab" exitCode=0 Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.162433 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6lf8" Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.162409 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6lf8" event={"ID":"c70d1c56-ab69-4bb5-b7ce-c653b340d434","Type":"ContainerDied","Data":"b17f1c5295669b2718220cf3f3da6d77b52ef24494f5cad52b093bbbf3f482ab"} Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.162651 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6lf8" event={"ID":"c70d1c56-ab69-4bb5-b7ce-c653b340d434","Type":"ContainerDied","Data":"b13579619b4ae780d6a626b4ddb99ec759c23fd6a3926669a505d4480f275f20"} Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.162717 5049 scope.go:117] "RemoveContainer" containerID="b17f1c5295669b2718220cf3f3da6d77b52ef24494f5cad52b093bbbf3f482ab" Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.187851 5049 scope.go:117] "RemoveContainer" containerID="4d00092f2333fb7afdbd993f0290525f90238b30372cf073ff5e13a5701a1459" Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.213190 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6lf8"] Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.219040 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6lf8"] Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.233770 5049 scope.go:117] "RemoveContainer" containerID="adb97b77104f46411a72a62a9843f5485195251dfd8b9e61f01c0f4173ef58af" Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.252511 5049 scope.go:117] "RemoveContainer" containerID="b17f1c5295669b2718220cf3f3da6d77b52ef24494f5cad52b093bbbf3f482ab" Jan 27 18:17:08 crc kubenswrapper[5049]: E0127 18:17:08.252968 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b17f1c5295669b2718220cf3f3da6d77b52ef24494f5cad52b093bbbf3f482ab\": container with ID starting with b17f1c5295669b2718220cf3f3da6d77b52ef24494f5cad52b093bbbf3f482ab not found: ID does not exist" containerID="b17f1c5295669b2718220cf3f3da6d77b52ef24494f5cad52b093bbbf3f482ab" Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.253017 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b17f1c5295669b2718220cf3f3da6d77b52ef24494f5cad52b093bbbf3f482ab"} err="failed to get container status \"b17f1c5295669b2718220cf3f3da6d77b52ef24494f5cad52b093bbbf3f482ab\": rpc error: code = NotFound desc = could not find container \"b17f1c5295669b2718220cf3f3da6d77b52ef24494f5cad52b093bbbf3f482ab\": container with ID starting with b17f1c5295669b2718220cf3f3da6d77b52ef24494f5cad52b093bbbf3f482ab not found: ID does not exist" Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.253052 5049 scope.go:117] "RemoveContainer" containerID="4d00092f2333fb7afdbd993f0290525f90238b30372cf073ff5e13a5701a1459" Jan 27 18:17:08 crc kubenswrapper[5049]: E0127 18:17:08.253526 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d00092f2333fb7afdbd993f0290525f90238b30372cf073ff5e13a5701a1459\": container with ID starting with 4d00092f2333fb7afdbd993f0290525f90238b30372cf073ff5e13a5701a1459 not found: ID does not exist" containerID="4d00092f2333fb7afdbd993f0290525f90238b30372cf073ff5e13a5701a1459" Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.253605 5049 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d00092f2333fb7afdbd993f0290525f90238b30372cf073ff5e13a5701a1459"} err="failed to get container status \"4d00092f2333fb7afdbd993f0290525f90238b30372cf073ff5e13a5701a1459\": rpc error: code = NotFound desc = could not find container \"4d00092f2333fb7afdbd993f0290525f90238b30372cf073ff5e13a5701a1459\": container with ID starting with 4d00092f2333fb7afdbd993f0290525f90238b30372cf073ff5e13a5701a1459 not found: ID does not exist" Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.253654 5049 scope.go:117] "RemoveContainer" containerID="adb97b77104f46411a72a62a9843f5485195251dfd8b9e61f01c0f4173ef58af" Jan 27 18:17:08 crc kubenswrapper[5049]: E0127 18:17:08.254071 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb97b77104f46411a72a62a9843f5485195251dfd8b9e61f01c0f4173ef58af\": container with ID starting with adb97b77104f46411a72a62a9843f5485195251dfd8b9e61f01c0f4173ef58af not found: ID does not exist" containerID="adb97b77104f46411a72a62a9843f5485195251dfd8b9e61f01c0f4173ef58af" Jan 27 18:17:08 crc kubenswrapper[5049]: I0127 18:17:08.254111 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb97b77104f46411a72a62a9843f5485195251dfd8b9e61f01c0f4173ef58af"} err="failed to get container status \"adb97b77104f46411a72a62a9843f5485195251dfd8b9e61f01c0f4173ef58af\": rpc error: code = NotFound desc = could not find container \"adb97b77104f46411a72a62a9843f5485195251dfd8b9e61f01c0f4173ef58af\": container with ID starting with adb97b77104f46411a72a62a9843f5485195251dfd8b9e61f01c0f4173ef58af not found: ID does not exist" Jan 27 18:17:09 crc kubenswrapper[5049]: I0127 18:17:09.659326 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c70d1c56-ab69-4bb5-b7ce-c653b340d434" path="/var/lib/kubelet/pods/c70d1c56-ab69-4bb5-b7ce-c653b340d434/volumes" Jan 27 18:17:13 crc kubenswrapper[5049]: I0127 18:17:13.646663 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:17:13 crc kubenswrapper[5049]: E0127 18:17:13.647665 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:17:25 crc kubenswrapper[5049]: I0127 18:17:25.656813 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:17:25 crc kubenswrapper[5049]: E0127 18:17:25.658356 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:17:38 crc kubenswrapper[5049]: I0127 18:17:38.645558 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:17:38 crc 
kubenswrapper[5049]: E0127 18:17:38.646271 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.891107 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-hfw4q"] Jan 27 18:17:39 crc kubenswrapper[5049]: E0127 18:17:39.891425 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c70d1c56-ab69-4bb5-b7ce-c653b340d434" containerName="extract-content" Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.891440 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c70d1c56-ab69-4bb5-b7ce-c653b340d434" containerName="extract-content" Jan 27 18:17:39 crc kubenswrapper[5049]: E0127 18:17:39.891456 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c70d1c56-ab69-4bb5-b7ce-c653b340d434" containerName="registry-server" Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.891463 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c70d1c56-ab69-4bb5-b7ce-c653b340d434" containerName="registry-server" Jan 27 18:17:39 crc kubenswrapper[5049]: E0127 18:17:39.891486 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c70d1c56-ab69-4bb5-b7ce-c653b340d434" containerName="extract-utilities" Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.891494 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c70d1c56-ab69-4bb5-b7ce-c653b340d434" containerName="extract-utilities" Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.892121 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c70d1c56-ab69-4bb5-b7ce-c653b340d434" containerName="registry-server" Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.895612 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.906629 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-wsxw2" Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.906756 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.907089 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.907129 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.907213 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.930049 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-hfw4q"] Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.975559 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-config\") pod \"dnsmasq-dns-5d7b5456f5-hfw4q\" (UID: \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.975864 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rktv7\" (UniqueName: \"kubernetes.io/projected/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-kube-api-access-rktv7\") pod \"dnsmasq-dns-5d7b5456f5-hfw4q\" (UID: \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:39 crc kubenswrapper[5049]: I0127 18:17:39.975984 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-hfw4q\" (UID: \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.077414 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rktv7\" (UniqueName: \"kubernetes.io/projected/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-kube-api-access-rktv7\") pod \"dnsmasq-dns-5d7b5456f5-hfw4q\" (UID: \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.077757 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-hfw4q\" (UID: \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.077949 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-config\") pod \"dnsmasq-dns-5d7b5456f5-hfw4q\" (UID: \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.078790 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-hfw4q\" (UID: \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.078797 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-config\") pod \"dnsmasq-dns-5d7b5456f5-hfw4q\" (UID: \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.103611 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rktv7\" (UniqueName: \"kubernetes.io/projected/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-kube-api-access-rktv7\") pod \"dnsmasq-dns-5d7b5456f5-hfw4q\" (UID: \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.192119 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-5gwkx"] Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.194094 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.218540 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-5gwkx"] Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.234633 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.282261 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/235d59dd-01cd-4eca-ba15-fca1e9d9241f-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-5gwkx\" (UID: \"235d59dd-01cd-4eca-ba15-fca1e9d9241f\") " pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.282376 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/235d59dd-01cd-4eca-ba15-fca1e9d9241f-config\") pod \"dnsmasq-dns-98ddfc8f-5gwkx\" (UID: \"235d59dd-01cd-4eca-ba15-fca1e9d9241f\") " pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.282438 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq82c\" (UniqueName: \"kubernetes.io/projected/235d59dd-01cd-4eca-ba15-fca1e9d9241f-kube-api-access-qq82c\") pod \"dnsmasq-dns-98ddfc8f-5gwkx\" (UID: \"235d59dd-01cd-4eca-ba15-fca1e9d9241f\") " pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.384014 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/235d59dd-01cd-4eca-ba15-fca1e9d9241f-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-5gwkx\" (UID: \"235d59dd-01cd-4eca-ba15-fca1e9d9241f\") " pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.384089 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/235d59dd-01cd-4eca-ba15-fca1e9d9241f-config\") pod \"dnsmasq-dns-98ddfc8f-5gwkx\" (UID: 
\"235d59dd-01cd-4eca-ba15-fca1e9d9241f\") " pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.384135 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq82c\" (UniqueName: \"kubernetes.io/projected/235d59dd-01cd-4eca-ba15-fca1e9d9241f-kube-api-access-qq82c\") pod \"dnsmasq-dns-98ddfc8f-5gwkx\" (UID: \"235d59dd-01cd-4eca-ba15-fca1e9d9241f\") " pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.385427 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/235d59dd-01cd-4eca-ba15-fca1e9d9241f-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-5gwkx\" (UID: \"235d59dd-01cd-4eca-ba15-fca1e9d9241f\") " pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.389341 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/235d59dd-01cd-4eca-ba15-fca1e9d9241f-config\") pod \"dnsmasq-dns-98ddfc8f-5gwkx\" (UID: \"235d59dd-01cd-4eca-ba15-fca1e9d9241f\") " pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.405489 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq82c\" (UniqueName: \"kubernetes.io/projected/235d59dd-01cd-4eca-ba15-fca1e9d9241f-kube-api-access-qq82c\") pod \"dnsmasq-dns-98ddfc8f-5gwkx\" (UID: \"235d59dd-01cd-4eca-ba15-fca1e9d9241f\") " pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.509779 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.720995 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-5gwkx"] Jan 27 18:17:40 crc kubenswrapper[5049]: I0127 18:17:40.744492 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-hfw4q"] Jan 27 18:17:40 crc kubenswrapper[5049]: W0127 18:17:40.749876 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee3b5ce6_bceb_4f20_aaaa_7f92bc3cbaa3.slice/crio-7f8df0ad8365c99fb62809180d5e9668483e61f815cc9511351339dd67ad917d WatchSource:0}: Error finding container 7f8df0ad8365c99fb62809180d5e9668483e61f815cc9511351339dd67ad917d: Status 404 returned error can't find the container with id 7f8df0ad8365c99fb62809180d5e9668483e61f815cc9511351339dd67ad917d Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.044739 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.046131 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.047970 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.048369 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.048625 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-654v2" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.049475 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.051965 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.062880 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.100369 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.100423 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bf947e0-b153-4bed-93e6-35eef778805c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.100468 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bf947e0-b153-4bed-93e6-35eef778805c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.100499 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-56a5ca29-0654-4d03-983b-601420b597f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a5ca29-0654-4d03-983b-601420b597f3\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.100540 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bf947e0-b153-4bed-93e6-35eef778805c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.100594 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29ssq\" (UniqueName: \"kubernetes.io/projected/0bf947e0-b153-4bed-93e6-35eef778805c-kube-api-access-29ssq\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.100640 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bf947e0-b153-4bed-93e6-35eef778805c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.100731 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.100767 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.203267 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bf947e0-b153-4bed-93e6-35eef778805c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.203344 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-56a5ca29-0654-4d03-983b-601420b597f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a5ca29-0654-4d03-983b-601420b597f3\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.203402 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bf947e0-b153-4bed-93e6-35eef778805c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.203435 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29ssq\" (UniqueName: \"kubernetes.io/projected/0bf947e0-b153-4bed-93e6-35eef778805c-kube-api-access-29ssq\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.203490 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bf947e0-b153-4bed-93e6-35eef778805c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.203536 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.204524 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/0bf947e0-b153-4bed-93e6-35eef778805c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.204943 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.206063 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.207097 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.207386 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bf947e0-b153-4bed-93e6-35eef778805c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.207749 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.207887 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bf947e0-b153-4bed-93e6-35eef778805c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.209987 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.211216 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bf947e0-b153-4bed-93e6-35eef778805c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.214730 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.214779 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-56a5ca29-0654-4d03-983b-601420b597f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a5ca29-0654-4d03-983b-601420b597f3\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c97404806f98214d28011a0771e9a1e94251d52895e19c3f4ff7480a3060213c/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.215483 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bf947e0-b153-4bed-93e6-35eef778805c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.220349 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29ssq\" (UniqueName: \"kubernetes.io/projected/0bf947e0-b153-4bed-93e6-35eef778805c-kube-api-access-29ssq\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.252866 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-56a5ca29-0654-4d03-983b-601420b597f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a5ca29-0654-4d03-983b-601420b597f3\") pod \"rabbitmq-server-0\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.363337 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.364524 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.367027 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.367184 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.367304 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.367523 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.368846 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r9gwl" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.381767 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.413595 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxsxv\" (UniqueName: \"kubernetes.io/projected/b687fc46-d052-4e19-a322-17b720747080-kube-api-access-sxsxv\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.413724 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.413763 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b687fc46-d052-4e19-a322-17b720747080-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.413788 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b687fc46-d052-4e19-a322-17b720747080-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.413832 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.413886 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b687fc46-d052-4e19-a322-17b720747080-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 
18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.414020 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.414091 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b687fc46-d052-4e19-a322-17b720747080-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.414142 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.459612 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.475395 5049 generic.go:334] "Generic (PLEG): container finished" podID="ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3" containerID="d99e8e561fabf971328a1cacbe7ac5e426216d27551c1ae3dd27b3a1827ed298" exitCode=0 Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.475463 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" event={"ID":"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3","Type":"ContainerDied","Data":"d99e8e561fabf971328a1cacbe7ac5e426216d27551c1ae3dd27b3a1827ed298"} Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.475488 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" event={"ID":"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3","Type":"ContainerStarted","Data":"7f8df0ad8365c99fb62809180d5e9668483e61f815cc9511351339dd67ad917d"} Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.477533 5049 generic.go:334] "Generic (PLEG): container finished" podID="235d59dd-01cd-4eca-ba15-fca1e9d9241f" containerID="57e57e2aaea33ad4b286deb570618509d0a8bf4a8ac24925cdb5af395a6e0c5b" exitCode=0 Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.477590 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" event={"ID":"235d59dd-01cd-4eca-ba15-fca1e9d9241f","Type":"ContainerDied","Data":"57e57e2aaea33ad4b286deb570618509d0a8bf4a8ac24925cdb5af395a6e0c5b"} Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.477634 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" event={"ID":"235d59dd-01cd-4eca-ba15-fca1e9d9241f","Type":"ContainerStarted","Data":"2e45e7061187f183aa89ef58994e3d99e2dddec3ca6c7e0c6bbc5f4839f6b3f7"} Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.515332 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.515796 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b687fc46-d052-4e19-a322-17b720747080-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.515923 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.516025 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b687fc46-d052-4e19-a322-17b720747080-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.516147 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.516236 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxsxv\" (UniqueName: \"kubernetes.io/projected/b687fc46-d052-4e19-a322-17b720747080-kube-api-access-sxsxv\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.516338 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.516408 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b687fc46-d052-4e19-a322-17b720747080-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.516483 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b687fc46-d052-4e19-a322-17b720747080-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.517756 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.519144 5049 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.520395 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.520431 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f8f92899ad06486a1cbd0ade420cbde09769e29832e08ef880c263f5b9eae28b/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.523444 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b687fc46-d052-4e19-a322-17b720747080-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.523655 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b687fc46-d052-4e19-a322-17b720747080-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.523734 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.523864 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b687fc46-d052-4e19-a322-17b720747080-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.527332 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b687fc46-d052-4e19-a322-17b720747080-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.537566 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxsxv\" (UniqueName: \"kubernetes.io/projected/b687fc46-d052-4e19-a322-17b720747080-kube-api-access-sxsxv\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.572671 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: E0127 18:17:41.672164 5049 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Jan 27 18:17:41 crc kubenswrapper[5049]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 18:17:41 crc kubenswrapper[5049]: > podSandboxID="7f8df0ad8365c99fb62809180d5e9668483e61f815cc9511351339dd67ad917d" Jan 27 18:17:41 crc kubenswrapper[5049]: E0127 18:17:41.672544 5049 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 27 18:17:41 crc kubenswrapper[5049]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rktv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-5d7b5456f5-hfw4q_openstack(ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Jan 27 18:17:41 crc kubenswrapper[5049]: > logger="UnhandledError" Jan 27 18:17:41 crc kubenswrapper[5049]: E0127 18:17:41.673686 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" podUID="ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.685624 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:17:41 crc kubenswrapper[5049]: I0127 18:17:41.911321 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:17:41 crc kubenswrapper[5049]: W0127 18:17:41.915909 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bf947e0_b153_4bed_93e6_35eef778805c.slice/crio-388194fb476f3c17b5b13aef4b9110ea22446d02210701737d967ca3753462fa WatchSource:0}: Error finding container 388194fb476f3c17b5b13aef4b9110ea22446d02210701737d967ca3753462fa: Status 404 returned error can't find the container with id 388194fb476f3c17b5b13aef4b9110ea22446d02210701737d967ca3753462fa Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.122601 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.472359 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.481335 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.488830 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.491046 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-jdvc9" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.497225 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.522854 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.525605 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.532414 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.532913 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bf947e0-b153-4bed-93e6-35eef778805c","Type":"ContainerStarted","Data":"388194fb476f3c17b5b13aef4b9110ea22446d02210701737d967ca3753462fa"} Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.537123 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" event={"ID":"235d59dd-01cd-4eca-ba15-fca1e9d9241f","Type":"ContainerStarted","Data":"9fdcaaced66898946e0e4c34729333d369b9e3b942e4c486bd8296942b742859"} Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.537346 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.542213 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b687fc46-d052-4e19-a322-17b720747080","Type":"ContainerStarted","Data":"9198c97d56ae7a0128a82a9cbba4a1b39ba9ba9cf90d83d5fe7702cedede1dc2"} Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.573290 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" podStartSLOduration=2.5732749569999998 podStartE2EDuration="2.573274957s" podCreationTimestamp="2026-01-27 18:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:17:42.56765386 +0000 UTC m=+4837.666627439" watchObservedRunningTime="2026-01-27 18:17:42.573274957 +0000 UTC m=+4837.672248506" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.632726 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62237ec0-8150-441d-ad88-6b48f26a9aa7-kolla-config\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.632790 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62237ec0-8150-441d-ad88-6b48f26a9aa7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: 
I0127 18:17:42.632921 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv7mt\" (UniqueName: \"kubernetes.io/projected/62237ec0-8150-441d-ad88-6b48f26a9aa7-kube-api-access-vv7mt\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.632961 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/62237ec0-8150-441d-ad88-6b48f26a9aa7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.633007 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/62237ec0-8150-441d-ad88-6b48f26a9aa7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.633034 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/62237ec0-8150-441d-ad88-6b48f26a9aa7-config-data-default\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.633080 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-10017b91-8c06-47d5-85e3-faf8db9beb34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10017b91-8c06-47d5-85e3-faf8db9beb34\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.633105 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62237ec0-8150-441d-ad88-6b48f26a9aa7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.733943 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/62237ec0-8150-441d-ad88-6b48f26a9aa7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.734010 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/62237ec0-8150-441d-ad88-6b48f26a9aa7-config-data-default\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.734027 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/62237ec0-8150-441d-ad88-6b48f26a9aa7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 
18:17:42.734050 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-10017b91-8c06-47d5-85e3-faf8db9beb34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10017b91-8c06-47d5-85e3-faf8db9beb34\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.734067 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62237ec0-8150-441d-ad88-6b48f26a9aa7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.734097 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62237ec0-8150-441d-ad88-6b48f26a9aa7-kolla-config\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.734122 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62237ec0-8150-441d-ad88-6b48f26a9aa7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.734204 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv7mt\" (UniqueName: \"kubernetes.io/projected/62237ec0-8150-441d-ad88-6b48f26a9aa7-kube-api-access-vv7mt\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.735097 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/62237ec0-8150-441d-ad88-6b48f26a9aa7-kolla-config\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.735468 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/62237ec0-8150-441d-ad88-6b48f26a9aa7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.736115 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/62237ec0-8150-441d-ad88-6b48f26a9aa7-config-data-default\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.736392 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62237ec0-8150-441d-ad88-6b48f26a9aa7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.738480 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/62237ec0-8150-441d-ad88-6b48f26a9aa7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.738630 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62237ec0-8150-441d-ad88-6b48f26a9aa7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.743785 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.743825 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-10017b91-8c06-47d5-85e3-faf8db9beb34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10017b91-8c06-47d5-85e3-faf8db9beb34\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/327cf94f93fddaec8b212dcf3ecd47d625ec319d2edb48ffb9b44cd243d34e0b/globalmount\"" pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.749963 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv7mt\" (UniqueName: \"kubernetes.io/projected/62237ec0-8150-441d-ad88-6b48f26a9aa7-kube-api-access-vv7mt\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.767983 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-10017b91-8c06-47d5-85e3-faf8db9beb34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10017b91-8c06-47d5-85e3-faf8db9beb34\") pod \"openstack-galera-0\" (UID: \"62237ec0-8150-441d-ad88-6b48f26a9aa7\") " pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.846847 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.848009 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.850184 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.850517 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-qtwdw" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.862289 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.864129 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.936901 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f2a2869-0227-4632-98b0-faced10a3a7d-kolla-config\") pod \"memcached-0\" (UID: \"8f2a2869-0227-4632-98b0-faced10a3a7d\") " pod="openstack/memcached-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.936982 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f2a2869-0227-4632-98b0-faced10a3a7d-config-data\") pod \"memcached-0\" (UID: \"8f2a2869-0227-4632-98b0-faced10a3a7d\") " pod="openstack/memcached-0" Jan 27 18:17:42 crc kubenswrapper[5049]: I0127 18:17:42.937014 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n7sj\" (UniqueName: \"kubernetes.io/projected/8f2a2869-0227-4632-98b0-faced10a3a7d-kube-api-access-4n7sj\") pod \"memcached-0\" (UID: \"8f2a2869-0227-4632-98b0-faced10a3a7d\") " pod="openstack/memcached-0" Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.038725 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f2a2869-0227-4632-98b0-faced10a3a7d-kolla-config\") pod \"memcached-0\" (UID: \"8f2a2869-0227-4632-98b0-faced10a3a7d\") " pod="openstack/memcached-0" Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.038831 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f2a2869-0227-4632-98b0-faced10a3a7d-config-data\") pod \"memcached-0\" (UID: \"8f2a2869-0227-4632-98b0-faced10a3a7d\") " pod="openstack/memcached-0" Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.038878 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n7sj\" (UniqueName: \"kubernetes.io/projected/8f2a2869-0227-4632-98b0-faced10a3a7d-kube-api-access-4n7sj\") pod \"memcached-0\" (UID: \"8f2a2869-0227-4632-98b0-faced10a3a7d\") " pod="openstack/memcached-0" Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.039796 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f2a2869-0227-4632-98b0-faced10a3a7d-kolla-config\") pod \"memcached-0\" (UID: \"8f2a2869-0227-4632-98b0-faced10a3a7d\") " pod="openstack/memcached-0" Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.040005 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f2a2869-0227-4632-98b0-faced10a3a7d-config-data\") pod \"memcached-0\" (UID: \"8f2a2869-0227-4632-98b0-faced10a3a7d\") " pod="openstack/memcached-0" Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.060390 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n7sj\" (UniqueName: \"kubernetes.io/projected/8f2a2869-0227-4632-98b0-faced10a3a7d-kube-api-access-4n7sj\") pod \"memcached-0\" (UID: \"8f2a2869-0227-4632-98b0-faced10a3a7d\") " pod="openstack/memcached-0" Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.170402 5049 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/memcached-0" Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.352607 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.551768 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bf947e0-b153-4bed-93e6-35eef778805c","Type":"ContainerStarted","Data":"ae4331d1ea8476faf0063ce94bac4e77cd5e7c4909281c9af3cdb53ecd05aeb6"} Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.553867 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b687fc46-d052-4e19-a322-17b720747080","Type":"ContainerStarted","Data":"11cc7f388ec05015c84bc69449fb5202d0f75eb9ad3ed10f23ba505e2bdfa46c"} Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.555532 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62237ec0-8150-441d-ad88-6b48f26a9aa7","Type":"ContainerStarted","Data":"accfd01672b3115bb3cd23b753ca2f29f5e2ff417f6acb8261535ab8b3668723"} Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.555586 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62237ec0-8150-441d-ad88-6b48f26a9aa7","Type":"ContainerStarted","Data":"f190189f594f9fcd2542e28f36de67ffb60590abb0063caa75061095e9579af2"} Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.557406 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" event={"ID":"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3","Type":"ContainerStarted","Data":"f938656f71893ed5f223404c1ec6d0d5eb46f899eeb22a27ca2928727ef2ef31"} Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.557654 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.594437 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" podStartSLOduration=4.594416207 podStartE2EDuration="4.594416207s" podCreationTimestamp="2026-01-27 18:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:17:43.590591781 +0000 UTC m=+4838.689565340" watchObservedRunningTime="2026-01-27 18:17:43.594416207 +0000 UTC m=+4838.693389756" Jan 27 18:17:43 crc kubenswrapper[5049]: W0127 18:17:43.697815 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f2a2869_0227_4632_98b0_faced10a3a7d.slice/crio-eb3967290f403e80c3a137e29568638cf1191e11da2f628ca265c495e97530a6 WatchSource:0}: Error finding container eb3967290f403e80c3a137e29568638cf1191e11da2f628ca265c495e97530a6: Status 404 returned error can't find the container with id eb3967290f403e80c3a137e29568638cf1191e11da2f628ca265c495e97530a6 Jan 27 18:17:43 crc kubenswrapper[5049]: I0127 18:17:43.702074 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.021553 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.023115 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.026658 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.027036 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.027361 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.027850 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-6hblt" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.043023 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.156962 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbgr5\" (UniqueName: \"kubernetes.io/projected/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-kube-api-access-sbgr5\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.157052 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.157071 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.157096 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.157133 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.157311 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.157423 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.157531 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-68a2b484-a74d-45ae-b902-f94a80c122de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68a2b484-a74d-45ae-b902-f94a80c122de\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.258707 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbgr5\" (UniqueName: \"kubernetes.io/projected/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-kube-api-access-sbgr5\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.258817 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.258854 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.258899 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.258969 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.259037 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.259110 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.259166 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-68a2b484-a74d-45ae-b902-f94a80c122de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68a2b484-a74d-45ae-b902-f94a80c122de\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.259723 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.259956 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.261119 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.262645 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.265407 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.265481 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-68a2b484-a74d-45ae-b902-f94a80c122de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68a2b484-a74d-45ae-b902-f94a80c122de\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/653f7b29cb9ed00e71113fb2090666f2ae95a6c2f1a7c6f61131223a202af70a/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.268821 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.269458 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.302859 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbgr5\" (UniqueName: \"kubernetes.io/projected/6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca-kube-api-access-sbgr5\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.316342 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-68a2b484-a74d-45ae-b902-f94a80c122de\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68a2b484-a74d-45ae-b902-f94a80c122de\") pod \"openstack-cell1-galera-0\" (UID: \"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca\") " pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.342742 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.566712 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8f2a2869-0227-4632-98b0-faced10a3a7d","Type":"ContainerStarted","Data":"7148b8b80f3aebc4add69a692d88c2135ed5f26160187331f74d0f754afa7d46"} Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.567047 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8f2a2869-0227-4632-98b0-faced10a3a7d","Type":"ContainerStarted","Data":"eb3967290f403e80c3a137e29568638cf1191e11da2f628ca265c495e97530a6"} Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.599737 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.599707565 podStartE2EDuration="2.599707565s" podCreationTimestamp="2026-01-27 18:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:17:44.590277532 +0000 UTC m=+4839.689251101" watchObservedRunningTime="2026-01-27 18:17:44.599707565 +0000 UTC m=+4839.698681134" Jan 27 18:17:44 crc kubenswrapper[5049]: I0127 18:17:44.874570 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 18:17:44 crc kubenswrapper[5049]: W0127 18:17:44.879859 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b2b0ceb_df56_437d_a7e1_e79a57a7e5ca.slice/crio-e5e88c058f96bbdfadaf687b8e208c301e8e72dc6d1291652ed9a7868182977d WatchSource:0}: Error finding container e5e88c058f96bbdfadaf687b8e208c301e8e72dc6d1291652ed9a7868182977d: Status 404 returned error can't find the container with id e5e88c058f96bbdfadaf687b8e208c301e8e72dc6d1291652ed9a7868182977d Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.008938 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pxjgg"] Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.010816 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.018916 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxjgg"] Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.078553 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gswhn\" (UniqueName: \"kubernetes.io/projected/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-kube-api-access-gswhn\") pod \"community-operators-pxjgg\" (UID: \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\") " pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.079144 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-catalog-content\") pod \"community-operators-pxjgg\" (UID: \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\") " pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.079494 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-utilities\") pod \"community-operators-pxjgg\" (UID: \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\") " pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.181728 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gswhn\" (UniqueName: \"kubernetes.io/projected/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-kube-api-access-gswhn\") pod \"community-operators-pxjgg\" (UID: \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\") " pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.182117 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-catalog-content\") pod \"community-operators-pxjgg\" (UID: \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\") " pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.182166 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-utilities\") pod \"community-operators-pxjgg\" (UID: \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\") " pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.182567 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-catalog-content\") pod \"community-operators-pxjgg\" (UID: \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\") " pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.182648 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-utilities\") pod \"community-operators-pxjgg\" (UID: \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\") " pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.560617 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gswhn\" (UniqueName: \"kubernetes.io/projected/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-kube-api-access-gswhn\") pod \"community-operators-pxjgg\" (UID: \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\") " pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.574466 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca","Type":"ContainerStarted","Data":"e5e88c058f96bbdfadaf687b8e208c301e8e72dc6d1291652ed9a7868182977d"} Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.574632 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 18:17:45 crc kubenswrapper[5049]: I0127 18:17:45.637017 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:46 crc kubenswrapper[5049]: I0127 18:17:46.085700 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxjgg"] Jan 27 18:17:46 crc kubenswrapper[5049]: I0127 18:17:46.588131 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca","Type":"ContainerStarted","Data":"7daf8a2e3174777fb28f93d7ca469440e41e78f63adae9e018aa912e9a0eda1b"} Jan 27 18:17:46 crc kubenswrapper[5049]: I0127 18:17:46.590420 5049 generic.go:334] "Generic (PLEG): container finished" podID="255fb1fd-704f-4a96-a4f1-d61ca3e960e3" containerID="f0326b68f6546b8830be1e324777e7146d0cc47c266b30fccbd0812dff7b1668" exitCode=0 Jan 27 18:17:46 crc kubenswrapper[5049]: I0127 18:17:46.590619 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxjgg" event={"ID":"255fb1fd-704f-4a96-a4f1-d61ca3e960e3","Type":"ContainerDied","Data":"f0326b68f6546b8830be1e324777e7146d0cc47c266b30fccbd0812dff7b1668"} Jan 27 18:17:46 crc kubenswrapper[5049]: I0127 18:17:46.590782 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxjgg" event={"ID":"255fb1fd-704f-4a96-a4f1-d61ca3e960e3","Type":"ContainerStarted","Data":"f1cd4d5d0ea137340b2649bcee40e8fa176e957084db92fa7b0b22161f856c34"} Jan 27 18:17:46 crc kubenswrapper[5049]: I0127 18:17:46.593592 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:17:47 crc kubenswrapper[5049]: I0127 18:17:47.603804 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxjgg" event={"ID":"255fb1fd-704f-4a96-a4f1-d61ca3e960e3","Type":"ContainerStarted","Data":"92fe75c3147389b9d597f37e514c6ccda8be49a8e710892e0f1185a6b354ab60"} Jan 27 18:17:47 crc kubenswrapper[5049]: I0127 18:17:47.608533 5049 generic.go:334] "Generic (PLEG): container finished" podID="62237ec0-8150-441d-ad88-6b48f26a9aa7" containerID="accfd01672b3115bb3cd23b753ca2f29f5e2ff417f6acb8261535ab8b3668723" exitCode=0 Jan 27 18:17:47 crc kubenswrapper[5049]: I0127 18:17:47.608657 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62237ec0-8150-441d-ad88-6b48f26a9aa7","Type":"ContainerDied","Data":"accfd01672b3115bb3cd23b753ca2f29f5e2ff417f6acb8261535ab8b3668723"} Jan 27 18:17:48 crc kubenswrapper[5049]: I0127 18:17:48.171900 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/memcached-0" Jan 27 18:17:48 crc kubenswrapper[5049]: I0127 18:17:48.616716 5049 generic.go:334] "Generic (PLEG): container finished" podID="255fb1fd-704f-4a96-a4f1-d61ca3e960e3" containerID="92fe75c3147389b9d597f37e514c6ccda8be49a8e710892e0f1185a6b354ab60" exitCode=0 Jan 27 18:17:48 crc kubenswrapper[5049]: I0127 18:17:48.617534 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxjgg" event={"ID":"255fb1fd-704f-4a96-a4f1-d61ca3e960e3","Type":"ContainerDied","Data":"92fe75c3147389b9d597f37e514c6ccda8be49a8e710892e0f1185a6b354ab60"} Jan 27 18:17:48 crc kubenswrapper[5049]: I0127 18:17:48.620093 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"62237ec0-8150-441d-ad88-6b48f26a9aa7","Type":"ContainerStarted","Data":"3d57f04770f3a1deb45aad4607a1410158786cd220cf11f419214a6112044829"} Jan 27 18:17:48 crc kubenswrapper[5049]: I0127 18:17:48.673784 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.673767184 podStartE2EDuration="7.673767184s" podCreationTimestamp="2026-01-27 18:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:17:48.657791588 +0000 UTC m=+4843.756765167" watchObservedRunningTime="2026-01-27 18:17:48.673767184 +0000 UTC m=+4843.772740733" Jan 27 18:17:49 crc kubenswrapper[5049]: I0127 18:17:49.644169 5049 generic.go:334] "Generic (PLEG): container finished" podID="6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca" containerID="7daf8a2e3174777fb28f93d7ca469440e41e78f63adae9e018aa912e9a0eda1b" exitCode=0 Jan 27 18:17:49 crc kubenswrapper[5049]: I0127 18:17:49.644298 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca","Type":"ContainerDied","Data":"7daf8a2e3174777fb28f93d7ca469440e41e78f63adae9e018aa912e9a0eda1b"} Jan 27 18:17:49 crc kubenswrapper[5049]: I0127 18:17:49.679311 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxjgg" event={"ID":"255fb1fd-704f-4a96-a4f1-d61ca3e960e3","Type":"ContainerStarted","Data":"530d571cea967ef457bab40454c661262f076f57da588d06cdeae23c8d835970"} Jan 27 18:17:49 crc kubenswrapper[5049]: I0127 18:17:49.712312 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pxjgg" podStartSLOduration=2.913065391 podStartE2EDuration="5.712285011s" podCreationTimestamp="2026-01-27 18:17:44 +0000 UTC" firstStartedPulling="2026-01-27 18:17:46.59305321 +0000 UTC m=+4841.692026809" lastFinishedPulling="2026-01-27 18:17:49.39227288 +0000 UTC m=+4844.491246429" observedRunningTime="2026-01-27 18:17:49.705437449 +0000 UTC m=+4844.804411068" watchObservedRunningTime="2026-01-27 18:17:49.712285011 +0000 UTC m=+4844.811258600" Jan 27 18:17:50 crc kubenswrapper[5049]: I0127 18:17:50.236589 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:50 crc kubenswrapper[5049]: I0127 18:17:50.511620 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:17:50 crc kubenswrapper[5049]: I0127 18:17:50.565013 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-hfw4q"] Jan 27 18:17:50 crc 
kubenswrapper[5049]: I0127 18:17:50.645998 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:17:50 crc kubenswrapper[5049]: E0127 18:17:50.646194 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:17:50 crc kubenswrapper[5049]: I0127 18:17:50.677580 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" podUID="ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3" containerName="dnsmasq-dns" containerID="cri-o://f938656f71893ed5f223404c1ec6d0d5eb46f899eeb22a27ca2928727ef2ef31" gracePeriod=10 Jan 27 18:17:50 crc kubenswrapper[5049]: I0127 18:17:50.677907 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca","Type":"ContainerStarted","Data":"9c6965fc43f36315b9f5fc3048037274ca0e4d47ae673ff39a1bd1fb4486a061"} Jan 27 18:17:50 crc kubenswrapper[5049]: I0127 18:17:50.716286 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.716264741 podStartE2EDuration="8.716264741s" podCreationTimestamp="2026-01-27 18:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:17:50.71227864 +0000 UTC m=+4845.811252209" watchObservedRunningTime="2026-01-27 18:17:50.716264741 +0000 UTC m=+4845.815238280" Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.093164 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.182059 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rktv7\" (UniqueName: \"kubernetes.io/projected/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-kube-api-access-rktv7\") pod \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\" (UID: \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\") " Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.182140 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-config\") pod \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\" (UID: \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\") " Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.182213 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-dns-svc\") pod \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\" (UID: \"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3\") " Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.188849 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-kube-api-access-rktv7" (OuterVolumeSpecName: "kube-api-access-rktv7") pod "ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3" (UID: "ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3"). InnerVolumeSpecName "kube-api-access-rktv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.212540 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3" (UID: "ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.214893 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-config" (OuterVolumeSpecName: "config") pod "ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3" (UID: "ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.284765 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.284799 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rktv7\" (UniqueName: \"kubernetes.io/projected/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-kube-api-access-rktv7\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.284810 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.688003 5049 generic.go:334] "Generic (PLEG): container finished" podID="ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3" containerID="f938656f71893ed5f223404c1ec6d0d5eb46f899eeb22a27ca2928727ef2ef31" exitCode=0 Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.688069 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.688090 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" event={"ID":"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3","Type":"ContainerDied","Data":"f938656f71893ed5f223404c1ec6d0d5eb46f899eeb22a27ca2928727ef2ef31"} Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.688179 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-hfw4q" event={"ID":"ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3","Type":"ContainerDied","Data":"7f8df0ad8365c99fb62809180d5e9668483e61f815cc9511351339dd67ad917d"} Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.688215 5049 scope.go:117] "RemoveContainer" containerID="f938656f71893ed5f223404c1ec6d0d5eb46f899eeb22a27ca2928727ef2ef31" Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.704087 5049 scope.go:117] "RemoveContainer" containerID="d99e8e561fabf971328a1cacbe7ac5e426216d27551c1ae3dd27b3a1827ed298" Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.715138 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-hfw4q"] Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.723205 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-hfw4q"] Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.728489 5049 scope.go:117] "RemoveContainer" containerID="f938656f71893ed5f223404c1ec6d0d5eb46f899eeb22a27ca2928727ef2ef31" Jan 27 18:17:51 crc kubenswrapper[5049]: E0127 18:17:51.729396 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f938656f71893ed5f223404c1ec6d0d5eb46f899eeb22a27ca2928727ef2ef31\": container with ID starting with f938656f71893ed5f223404c1ec6d0d5eb46f899eeb22a27ca2928727ef2ef31 not found: ID does not exist" containerID="f938656f71893ed5f223404c1ec6d0d5eb46f899eeb22a27ca2928727ef2ef31" Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.729540 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f938656f71893ed5f223404c1ec6d0d5eb46f899eeb22a27ca2928727ef2ef31"} err="failed to get container status \"f938656f71893ed5f223404c1ec6d0d5eb46f899eeb22a27ca2928727ef2ef31\": rpc error: code = NotFound desc = could not find container \"f938656f71893ed5f223404c1ec6d0d5eb46f899eeb22a27ca2928727ef2ef31\": container with ID starting with f938656f71893ed5f223404c1ec6d0d5eb46f899eeb22a27ca2928727ef2ef31 not found: ID does not exist" Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.729658 5049 scope.go:117] "RemoveContainer" containerID="d99e8e561fabf971328a1cacbe7ac5e426216d27551c1ae3dd27b3a1827ed298" Jan 27 18:17:51 crc kubenswrapper[5049]: E0127 18:17:51.730349 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d99e8e561fabf971328a1cacbe7ac5e426216d27551c1ae3dd27b3a1827ed298\": container with ID starting with d99e8e561fabf971328a1cacbe7ac5e426216d27551c1ae3dd27b3a1827ed298 not found: ID does not exist" containerID="d99e8e561fabf971328a1cacbe7ac5e426216d27551c1ae3dd27b3a1827ed298" Jan 27 18:17:51 crc kubenswrapper[5049]: I0127 18:17:51.730423 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99e8e561fabf971328a1cacbe7ac5e426216d27551c1ae3dd27b3a1827ed298"} err="failed to get container status 
\"d99e8e561fabf971328a1cacbe7ac5e426216d27551c1ae3dd27b3a1827ed298\": rpc error: code = NotFound desc = could not find container \"d99e8e561fabf971328a1cacbe7ac5e426216d27551c1ae3dd27b3a1827ed298\": container with ID starting with d99e8e561fabf971328a1cacbe7ac5e426216d27551c1ae3dd27b3a1827ed298 not found: ID does not exist" Jan 27 18:17:52 crc kubenswrapper[5049]: I0127 18:17:52.862976 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 27 18:17:52 crc kubenswrapper[5049]: I0127 18:17:52.863337 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 27 18:17:52 crc kubenswrapper[5049]: I0127 18:17:52.964457 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 27 18:17:53 crc kubenswrapper[5049]: I0127 18:17:53.663890 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3" path="/var/lib/kubelet/pods/ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3/volumes" Jan 27 18:17:53 crc kubenswrapper[5049]: I0127 18:17:53.800418 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 27 18:17:54 crc kubenswrapper[5049]: I0127 18:17:54.343041 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:54 crc kubenswrapper[5049]: I0127 18:17:54.343632 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:54 crc kubenswrapper[5049]: I0127 18:17:54.450884 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:54 crc kubenswrapper[5049]: I0127 18:17:54.825624 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 27 18:17:55 crc kubenswrapper[5049]: I0127 18:17:55.637653 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:55 crc kubenswrapper[5049]: I0127 18:17:55.639291 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:55 crc kubenswrapper[5049]: I0127 18:17:55.720372 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:55 crc kubenswrapper[5049]: I0127 18:17:55.805246 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:55 crc kubenswrapper[5049]: I0127 18:17:55.968610 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pxjgg"] Jan 27 18:17:57 crc kubenswrapper[5049]: I0127 18:17:57.753019 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pxjgg" podUID="255fb1fd-704f-4a96-a4f1-d61ca3e960e3" containerName="registry-server" containerID="cri-o://530d571cea967ef457bab40454c661262f076f57da588d06cdeae23c8d835970" gracePeriod=2 Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.269447 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.405195 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-catalog-content\") pod \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\" (UID: \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\") " Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.405263 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gswhn\" (UniqueName: \"kubernetes.io/projected/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-kube-api-access-gswhn\") pod \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\" (UID: \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\") " Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.405286 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-utilities\") pod \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\" (UID: \"255fb1fd-704f-4a96-a4f1-d61ca3e960e3\") " Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.406086 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-utilities" (OuterVolumeSpecName: "utilities") pod "255fb1fd-704f-4a96-a4f1-d61ca3e960e3" (UID: "255fb1fd-704f-4a96-a4f1-d61ca3e960e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.406529 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.410551 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-kube-api-access-gswhn" (OuterVolumeSpecName: "kube-api-access-gswhn") pod "255fb1fd-704f-4a96-a4f1-d61ca3e960e3" (UID: "255fb1fd-704f-4a96-a4f1-d61ca3e960e3"). InnerVolumeSpecName "kube-api-access-gswhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.458047 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "255fb1fd-704f-4a96-a4f1-d61ca3e960e3" (UID: "255fb1fd-704f-4a96-a4f1-d61ca3e960e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.508174 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gswhn\" (UniqueName: \"kubernetes.io/projected/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-kube-api-access-gswhn\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.508203 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/255fb1fd-704f-4a96-a4f1-d61ca3e960e3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.767042 5049 generic.go:334] "Generic (PLEG): container finished" podID="255fb1fd-704f-4a96-a4f1-d61ca3e960e3" containerID="530d571cea967ef457bab40454c661262f076f57da588d06cdeae23c8d835970" exitCode=0 Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.767229 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxjgg" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.767263 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxjgg" event={"ID":"255fb1fd-704f-4a96-a4f1-d61ca3e960e3","Type":"ContainerDied","Data":"530d571cea967ef457bab40454c661262f076f57da588d06cdeae23c8d835970"} Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.770861 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxjgg" event={"ID":"255fb1fd-704f-4a96-a4f1-d61ca3e960e3","Type":"ContainerDied","Data":"f1cd4d5d0ea137340b2649bcee40e8fa176e957084db92fa7b0b22161f856c34"} Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.771524 5049 scope.go:117] "RemoveContainer" containerID="530d571cea967ef457bab40454c661262f076f57da588d06cdeae23c8d835970" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.829238 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pxjgg"] Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.837369 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pxjgg"] Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.838121 5049 scope.go:117] "RemoveContainer" containerID="92fe75c3147389b9d597f37e514c6ccda8be49a8e710892e0f1185a6b354ab60" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.856973 5049 scope.go:117] "RemoveContainer" containerID="f0326b68f6546b8830be1e324777e7146d0cc47c266b30fccbd0812dff7b1668" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.882449 5049 scope.go:117] "RemoveContainer" containerID="530d571cea967ef457bab40454c661262f076f57da588d06cdeae23c8d835970" Jan 27 18:17:58 crc kubenswrapper[5049]: E0127 18:17:58.882907 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"530d571cea967ef457bab40454c661262f076f57da588d06cdeae23c8d835970\": container with ID starting with 530d571cea967ef457bab40454c661262f076f57da588d06cdeae23c8d835970 not found: ID does not exist" containerID="530d571cea967ef457bab40454c661262f076f57da588d06cdeae23c8d835970" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.882959 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"530d571cea967ef457bab40454c661262f076f57da588d06cdeae23c8d835970"} err="failed to get container status 
\"530d571cea967ef457bab40454c661262f076f57da588d06cdeae23c8d835970\": rpc error: code = NotFound desc = could not find container \"530d571cea967ef457bab40454c661262f076f57da588d06cdeae23c8d835970\": container with ID starting with 530d571cea967ef457bab40454c661262f076f57da588d06cdeae23c8d835970 not found: ID does not exist" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.882992 5049 scope.go:117] "RemoveContainer" containerID="92fe75c3147389b9d597f37e514c6ccda8be49a8e710892e0f1185a6b354ab60" Jan 27 18:17:58 crc kubenswrapper[5049]: E0127 18:17:58.883232 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92fe75c3147389b9d597f37e514c6ccda8be49a8e710892e0f1185a6b354ab60\": container with ID starting with 92fe75c3147389b9d597f37e514c6ccda8be49a8e710892e0f1185a6b354ab60 not found: ID does not exist" containerID="92fe75c3147389b9d597f37e514c6ccda8be49a8e710892e0f1185a6b354ab60" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.883263 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92fe75c3147389b9d597f37e514c6ccda8be49a8e710892e0f1185a6b354ab60"} err="failed to get container status \"92fe75c3147389b9d597f37e514c6ccda8be49a8e710892e0f1185a6b354ab60\": rpc error: code = NotFound desc = could not find container \"92fe75c3147389b9d597f37e514c6ccda8be49a8e710892e0f1185a6b354ab60\": container with ID starting with 92fe75c3147389b9d597f37e514c6ccda8be49a8e710892e0f1185a6b354ab60 not found: ID does not exist" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.883286 5049 scope.go:117] "RemoveContainer" containerID="f0326b68f6546b8830be1e324777e7146d0cc47c266b30fccbd0812dff7b1668" Jan 27 18:17:58 crc kubenswrapper[5049]: E0127 18:17:58.883621 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0326b68f6546b8830be1e324777e7146d0cc47c266b30fccbd0812dff7b1668\": container with ID starting with f0326b68f6546b8830be1e324777e7146d0cc47c266b30fccbd0812dff7b1668 not found: ID does not exist" containerID="f0326b68f6546b8830be1e324777e7146d0cc47c266b30fccbd0812dff7b1668" Jan 27 18:17:58 crc kubenswrapper[5049]: I0127 18:17:58.883693 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0326b68f6546b8830be1e324777e7146d0cc47c266b30fccbd0812dff7b1668"} err="failed to get container status \"f0326b68f6546b8830be1e324777e7146d0cc47c266b30fccbd0812dff7b1668\": rpc error: code = NotFound desc = could not find container \"f0326b68f6546b8830be1e324777e7146d0cc47c266b30fccbd0812dff7b1668\": container with ID starting with f0326b68f6546b8830be1e324777e7146d0cc47c266b30fccbd0812dff7b1668 not found: ID does not exist" Jan 27 18:17:59 crc kubenswrapper[5049]: I0127 18:17:59.658858 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="255fb1fd-704f-4a96-a4f1-d61ca3e960e3" path="/var/lib/kubelet/pods/255fb1fd-704f-4a96-a4f1-d61ca3e960e3/volumes" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.494057 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6lsft"] Jan 27 18:18:01 crc kubenswrapper[5049]: E0127 18:18:01.494652 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255fb1fd-704f-4a96-a4f1-d61ca3e960e3" containerName="registry-server" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.494663 5049 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="255fb1fd-704f-4a96-a4f1-d61ca3e960e3" containerName="registry-server" Jan 27 18:18:01 crc kubenswrapper[5049]: E0127 18:18:01.494916 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255fb1fd-704f-4a96-a4f1-d61ca3e960e3" containerName="extract-content" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.494926 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="255fb1fd-704f-4a96-a4f1-d61ca3e960e3" containerName="extract-content" Jan 27 18:18:01 crc kubenswrapper[5049]: E0127 18:18:01.494938 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3" containerName="init" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.494943 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3" containerName="init" Jan 27 18:18:01 crc kubenswrapper[5049]: E0127 18:18:01.494951 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255fb1fd-704f-4a96-a4f1-d61ca3e960e3" containerName="extract-utilities" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.494959 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="255fb1fd-704f-4a96-a4f1-d61ca3e960e3" containerName="extract-utilities" Jan 27 18:18:01 crc kubenswrapper[5049]: E0127 18:18:01.494972 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3" containerName="dnsmasq-dns" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.494978 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3" containerName="dnsmasq-dns" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.495106 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="255fb1fd-704f-4a96-a4f1-d61ca3e960e3" containerName="registry-server" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.495116 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3b5ce6-bceb-4f20-aaaa-7f92bc3cbaa3" containerName="dnsmasq-dns" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.495585 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6lsft" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.498714 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.506272 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6lsft"] Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.677527 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f900ed2d-1e71-426d-b9d2-a826d3fbddc3-operator-scripts\") pod \"root-account-create-update-6lsft\" (UID: \"f900ed2d-1e71-426d-b9d2-a826d3fbddc3\") " pod="openstack/root-account-create-update-6lsft" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.677631 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnmtq\" (UniqueName: \"kubernetes.io/projected/f900ed2d-1e71-426d-b9d2-a826d3fbddc3-kube-api-access-vnmtq\") pod \"root-account-create-update-6lsft\" (UID: \"f900ed2d-1e71-426d-b9d2-a826d3fbddc3\") " pod="openstack/root-account-create-update-6lsft" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.790471 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f900ed2d-1e71-426d-b9d2-a826d3fbddc3-operator-scripts\") pod \"root-account-create-update-6lsft\" (UID: \"f900ed2d-1e71-426d-b9d2-a826d3fbddc3\") " pod="openstack/root-account-create-update-6lsft" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.790628 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnmtq\" (UniqueName: \"kubernetes.io/projected/f900ed2d-1e71-426d-b9d2-a826d3fbddc3-kube-api-access-vnmtq\") pod \"root-account-create-update-6lsft\" (UID: \"f900ed2d-1e71-426d-b9d2-a826d3fbddc3\") " pod="openstack/root-account-create-update-6lsft" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.792341 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f900ed2d-1e71-426d-b9d2-a826d3fbddc3-operator-scripts\") pod \"root-account-create-update-6lsft\" (UID: \"f900ed2d-1e71-426d-b9d2-a826d3fbddc3\") " pod="openstack/root-account-create-update-6lsft" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.825287 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnmtq\" (UniqueName: \"kubernetes.io/projected/f900ed2d-1e71-426d-b9d2-a826d3fbddc3-kube-api-access-vnmtq\") pod \"root-account-create-update-6lsft\" (UID: \"f900ed2d-1e71-426d-b9d2-a826d3fbddc3\") " pod="openstack/root-account-create-update-6lsft" Jan 27 18:18:01 crc kubenswrapper[5049]: I0127 18:18:01.839102 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6lsft" Jan 27 18:18:02 crc kubenswrapper[5049]: I0127 18:18:02.247884 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6lsft"] Jan 27 18:18:02 crc kubenswrapper[5049]: I0127 18:18:02.804559 5049 generic.go:334] "Generic (PLEG): container finished" podID="f900ed2d-1e71-426d-b9d2-a826d3fbddc3" containerID="dacddea103072d0a42b71a23be2f283adff00cc36fd12e8c98fd60d11d432368" exitCode=0 Jan 27 18:18:02 crc kubenswrapper[5049]: I0127 18:18:02.804615 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6lsft" event={"ID":"f900ed2d-1e71-426d-b9d2-a826d3fbddc3","Type":"ContainerDied","Data":"dacddea103072d0a42b71a23be2f283adff00cc36fd12e8c98fd60d11d432368"} Jan 27 18:18:02 crc kubenswrapper[5049]: I0127 18:18:02.804652 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6lsft" event={"ID":"f900ed2d-1e71-426d-b9d2-a826d3fbddc3","Type":"ContainerStarted","Data":"72aa7aee6cad023f3b5335bb9f75c46274dab31d2de0c770509eea1aa9894de3"} Jan 27 18:18:04 crc kubenswrapper[5049]: I0127 18:18:04.159902 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6lsft" Jan 27 18:18:04 crc kubenswrapper[5049]: I0127 18:18:04.329857 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f900ed2d-1e71-426d-b9d2-a826d3fbddc3-operator-scripts\") pod \"f900ed2d-1e71-426d-b9d2-a826d3fbddc3\" (UID: \"f900ed2d-1e71-426d-b9d2-a826d3fbddc3\") " Jan 27 18:18:04 crc kubenswrapper[5049]: I0127 18:18:04.330019 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnmtq\" (UniqueName: \"kubernetes.io/projected/f900ed2d-1e71-426d-b9d2-a826d3fbddc3-kube-api-access-vnmtq\") pod \"f900ed2d-1e71-426d-b9d2-a826d3fbddc3\" (UID: \"f900ed2d-1e71-426d-b9d2-a826d3fbddc3\") " Jan 27 18:18:04 crc kubenswrapper[5049]: I0127 18:18:04.331465 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f900ed2d-1e71-426d-b9d2-a826d3fbddc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f900ed2d-1e71-426d-b9d2-a826d3fbddc3" (UID: "f900ed2d-1e71-426d-b9d2-a826d3fbddc3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:18:04 crc kubenswrapper[5049]: I0127 18:18:04.335850 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f900ed2d-1e71-426d-b9d2-a826d3fbddc3-kube-api-access-vnmtq" (OuterVolumeSpecName: "kube-api-access-vnmtq") pod "f900ed2d-1e71-426d-b9d2-a826d3fbddc3" (UID: "f900ed2d-1e71-426d-b9d2-a826d3fbddc3"). InnerVolumeSpecName "kube-api-access-vnmtq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:18:04 crc kubenswrapper[5049]: I0127 18:18:04.431859 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f900ed2d-1e71-426d-b9d2-a826d3fbddc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:04 crc kubenswrapper[5049]: I0127 18:18:04.431923 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnmtq\" (UniqueName: \"kubernetes.io/projected/f900ed2d-1e71-426d-b9d2-a826d3fbddc3-kube-api-access-vnmtq\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:04 crc kubenswrapper[5049]: I0127 18:18:04.824529 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6lsft" event={"ID":"f900ed2d-1e71-426d-b9d2-a826d3fbddc3","Type":"ContainerDied","Data":"72aa7aee6cad023f3b5335bb9f75c46274dab31d2de0c770509eea1aa9894de3"} Jan 27 18:18:04 crc kubenswrapper[5049]: I0127 18:18:04.824587 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72aa7aee6cad023f3b5335bb9f75c46274dab31d2de0c770509eea1aa9894de3" Jan 27 18:18:04 crc kubenswrapper[5049]: I0127 18:18:04.824663 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6lsft" Jan 27 18:18:05 crc kubenswrapper[5049]: I0127 18:18:05.651849 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:18:05 crc kubenswrapper[5049]: E0127 18:18:05.652133 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:18:08 crc kubenswrapper[5049]: I0127 18:18:08.007450 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6lsft"] Jan 27 18:18:08 crc kubenswrapper[5049]: I0127 18:18:08.020784 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6lsft"] Jan 27 18:18:09 crc kubenswrapper[5049]: I0127 18:18:09.663647 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f900ed2d-1e71-426d-b9d2-a826d3fbddc3" path="/var/lib/kubelet/pods/f900ed2d-1e71-426d-b9d2-a826d3fbddc3/volumes" Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.024244 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vfktn"] Jan 27 18:18:13 crc kubenswrapper[5049]: E0127 18:18:13.024972 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f900ed2d-1e71-426d-b9d2-a826d3fbddc3" containerName="mariadb-account-create-update" Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.025004 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f900ed2d-1e71-426d-b9d2-a826d3fbddc3" containerName="mariadb-account-create-update" Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.025242 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f900ed2d-1e71-426d-b9d2-a826d3fbddc3" containerName="mariadb-account-create-update" Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.025823 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vfktn" Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.028836 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.035266 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vfktn"] Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.174033 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzngg\" (UniqueName: \"kubernetes.io/projected/9045d3d5-d562-40d5-b0f3-5261ca0ce8bd-kube-api-access-kzngg\") pod \"root-account-create-update-vfktn\" (UID: \"9045d3d5-d562-40d5-b0f3-5261ca0ce8bd\") " pod="openstack/root-account-create-update-vfktn" Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.174224 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9045d3d5-d562-40d5-b0f3-5261ca0ce8bd-operator-scripts\") pod \"root-account-create-update-vfktn\" (UID: \"9045d3d5-d562-40d5-b0f3-5261ca0ce8bd\") " pod="openstack/root-account-create-update-vfktn" Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.276438 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzngg\" (UniqueName: \"kubernetes.io/projected/9045d3d5-d562-40d5-b0f3-5261ca0ce8bd-kube-api-access-kzngg\") pod \"root-account-create-update-vfktn\" (UID: \"9045d3d5-d562-40d5-b0f3-5261ca0ce8bd\") " pod="openstack/root-account-create-update-vfktn" Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.276605 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9045d3d5-d562-40d5-b0f3-5261ca0ce8bd-operator-scripts\") pod \"root-account-create-update-vfktn\" (UID: \"9045d3d5-d562-40d5-b0f3-5261ca0ce8bd\") " pod="openstack/root-account-create-update-vfktn" Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.278244 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9045d3d5-d562-40d5-b0f3-5261ca0ce8bd-operator-scripts\") pod \"root-account-create-update-vfktn\" (UID: \"9045d3d5-d562-40d5-b0f3-5261ca0ce8bd\") " pod="openstack/root-account-create-update-vfktn" Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.303919 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzngg\" (UniqueName: \"kubernetes.io/projected/9045d3d5-d562-40d5-b0f3-5261ca0ce8bd-kube-api-access-kzngg\") pod \"root-account-create-update-vfktn\" (UID: \"9045d3d5-d562-40d5-b0f3-5261ca0ce8bd\") " pod="openstack/root-account-create-update-vfktn" Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.354567 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vfktn" Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.788948 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vfktn"] Jan 27 18:18:13 crc kubenswrapper[5049]: I0127 18:18:13.895524 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vfktn" event={"ID":"9045d3d5-d562-40d5-b0f3-5261ca0ce8bd","Type":"ContainerStarted","Data":"07297e181a98bbd2544369c4b179c0396f6da65d6f22a0f63d775ed372d069e7"} Jan 27 18:18:14 crc kubenswrapper[5049]: I0127 18:18:14.909768 5049 generic.go:334] "Generic (PLEG): container finished" podID="9045d3d5-d562-40d5-b0f3-5261ca0ce8bd" containerID="6e4fbd4f0b8940ca283748461c91f4da4b9b6a3512080e4a5f9714d7eac7f27d" exitCode=0 Jan 27 18:18:14 crc kubenswrapper[5049]: I0127 18:18:14.909840 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vfktn" event={"ID":"9045d3d5-d562-40d5-b0f3-5261ca0ce8bd","Type":"ContainerDied","Data":"6e4fbd4f0b8940ca283748461c91f4da4b9b6a3512080e4a5f9714d7eac7f27d"} Jan 27 18:18:15 crc kubenswrapper[5049]: I0127 18:18:15.924648 5049 generic.go:334] "Generic (PLEG): container finished" podID="0bf947e0-b153-4bed-93e6-35eef778805c" containerID="ae4331d1ea8476faf0063ce94bac4e77cd5e7c4909281c9af3cdb53ecd05aeb6" exitCode=0 Jan 27 18:18:15 crc kubenswrapper[5049]: I0127 18:18:15.924773 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bf947e0-b153-4bed-93e6-35eef778805c","Type":"ContainerDied","Data":"ae4331d1ea8476faf0063ce94bac4e77cd5e7c4909281c9af3cdb53ecd05aeb6"} Jan 27 18:18:15 crc kubenswrapper[5049]: I0127 18:18:15.928797 5049 generic.go:334] "Generic (PLEG): container finished" podID="b687fc46-d052-4e19-a322-17b720747080" containerID="11cc7f388ec05015c84bc69449fb5202d0f75eb9ad3ed10f23ba505e2bdfa46c" exitCode=0 Jan 27 18:18:15 crc kubenswrapper[5049]: I0127 18:18:15.928853 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b687fc46-d052-4e19-a322-17b720747080","Type":"ContainerDied","Data":"11cc7f388ec05015c84bc69449fb5202d0f75eb9ad3ed10f23ba505e2bdfa46c"} Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.271753 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vfktn" Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.434378 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzngg\" (UniqueName: \"kubernetes.io/projected/9045d3d5-d562-40d5-b0f3-5261ca0ce8bd-kube-api-access-kzngg\") pod \"9045d3d5-d562-40d5-b0f3-5261ca0ce8bd\" (UID: \"9045d3d5-d562-40d5-b0f3-5261ca0ce8bd\") " Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.434501 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9045d3d5-d562-40d5-b0f3-5261ca0ce8bd-operator-scripts\") pod \"9045d3d5-d562-40d5-b0f3-5261ca0ce8bd\" (UID: \"9045d3d5-d562-40d5-b0f3-5261ca0ce8bd\") " Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.435208 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9045d3d5-d562-40d5-b0f3-5261ca0ce8bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9045d3d5-d562-40d5-b0f3-5261ca0ce8bd" (UID: "9045d3d5-d562-40d5-b0f3-5261ca0ce8bd"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.438869 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9045d3d5-d562-40d5-b0f3-5261ca0ce8bd-kube-api-access-kzngg" (OuterVolumeSpecName: "kube-api-access-kzngg") pod "9045d3d5-d562-40d5-b0f3-5261ca0ce8bd" (UID: "9045d3d5-d562-40d5-b0f3-5261ca0ce8bd"). InnerVolumeSpecName "kube-api-access-kzngg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.535979 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9045d3d5-d562-40d5-b0f3-5261ca0ce8bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.536023 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzngg\" (UniqueName: \"kubernetes.io/projected/9045d3d5-d562-40d5-b0f3-5261ca0ce8bd-kube-api-access-kzngg\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.937506 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vfktn" Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.937505 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vfktn" event={"ID":"9045d3d5-d562-40d5-b0f3-5261ca0ce8bd","Type":"ContainerDied","Data":"07297e181a98bbd2544369c4b179c0396f6da65d6f22a0f63d775ed372d069e7"} Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.937644 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07297e181a98bbd2544369c4b179c0396f6da65d6f22a0f63d775ed372d069e7" Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.941233 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bf947e0-b153-4bed-93e6-35eef778805c","Type":"ContainerStarted","Data":"4c96614256b8ddac32012434f1892486b416b02b9d291c26de73807bb73bdae7"} Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.941448 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.944608 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b687fc46-d052-4e19-a322-17b720747080","Type":"ContainerStarted","Data":"ad666b516e6350d5abf82391a313bd38618ca768b76049f062ca61ae31099c81"} Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.944861 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:16 crc kubenswrapper[5049]: I0127 18:18:16.995193 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.995162087 podStartE2EDuration="36.995162087s" podCreationTimestamp="2026-01-27 18:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:18:16.976142536 +0000 UTC m=+4872.075116105" watchObservedRunningTime="2026-01-27 18:18:16.995162087 +0000 UTC m=+4872.094135646" Jan 27 18:18:17 crc kubenswrapper[5049]: I0127 18:18:17.014457 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=37.014440806 podStartE2EDuration="37.014440806s" podCreationTimestamp="2026-01-27 18:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:18:17.007334577 +0000 UTC m=+4872.106308126" watchObservedRunningTime="2026-01-27 18:18:17.014440806 +0000 UTC m=+4872.113414355" Jan 27 18:18:20 crc kubenswrapper[5049]: I0127 18:18:20.646482 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:18:20 crc kubenswrapper[5049]: I0127 18:18:20.979281 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"cbf92023f90d1db31fc1c7a24fc76dae2e5df0bcbeedcf8ac7fb0089684228a6"} Jan 27 18:18:31 crc kubenswrapper[5049]: I0127 18:18:31.473258 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 18:18:31 crc kubenswrapper[5049]: I0127 18:18:31.689070 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.445377 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lkkzt"] Jan 27 18:18:37 crc kubenswrapper[5049]: E0127 18:18:37.449760 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9045d3d5-d562-40d5-b0f3-5261ca0ce8bd" containerName="mariadb-account-create-update" Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.449834 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9045d3d5-d562-40d5-b0f3-5261ca0ce8bd" containerName="mariadb-account-create-update" Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.450167 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="9045d3d5-d562-40d5-b0f3-5261ca0ce8bd" containerName="mariadb-account-create-update" Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.451148 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.464237 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lkkzt"] Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.544392 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8qrj\" (UniqueName: \"kubernetes.io/projected/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-kube-api-access-k8qrj\") pod \"dnsmasq-dns-5b7946d7b9-lkkzt\" (UID: \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.544696 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-config\") pod \"dnsmasq-dns-5b7946d7b9-lkkzt\" (UID: \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.544782 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-lkkzt\" (UID: \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.646882 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-lkkzt\" (UID: \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.647031 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8qrj\" (UniqueName: \"kubernetes.io/projected/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-kube-api-access-k8qrj\") pod \"dnsmasq-dns-5b7946d7b9-lkkzt\" (UID: \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.647091 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-config\") pod \"dnsmasq-dns-5b7946d7b9-lkkzt\" (UID: \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.648586 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-lkkzt\" (UID: \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.648771 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-config\") pod \"dnsmasq-dns-5b7946d7b9-lkkzt\" (UID: \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.670590 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8qrj\" (UniqueName: 
\"kubernetes.io/projected/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-kube-api-access-k8qrj\") pod \"dnsmasq-dns-5b7946d7b9-lkkzt\" (UID: \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\") " pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:18:37 crc kubenswrapper[5049]: I0127 18:18:37.809864 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:18:38 crc kubenswrapper[5049]: I0127 18:18:38.227531 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:18:38 crc kubenswrapper[5049]: I0127 18:18:38.352663 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lkkzt"] Jan 27 18:18:39 crc kubenswrapper[5049]: I0127 18:18:39.009414 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:18:39 crc kubenswrapper[5049]: I0127 18:18:39.155087 5049 generic.go:334] "Generic (PLEG): container finished" podID="a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3" containerID="21676ab9d06b522cb38538721accc89f088519d76951a999baed4e9bd05fd288" exitCode=0 Jan 27 18:18:39 crc kubenswrapper[5049]: I0127 18:18:39.155145 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" event={"ID":"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3","Type":"ContainerDied","Data":"21676ab9d06b522cb38538721accc89f088519d76951a999baed4e9bd05fd288"} Jan 27 18:18:39 crc kubenswrapper[5049]: I0127 18:18:39.155203 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" event={"ID":"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3","Type":"ContainerStarted","Data":"93e46dc034a3ad21f31d5c9e99f43cbd33bb0d5f28fd6014e842dbf91ac0b43d"} Jan 27 18:18:39 crc kubenswrapper[5049]: I0127 18:18:39.903532 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0bf947e0-b153-4bed-93e6-35eef778805c" containerName="rabbitmq" containerID="cri-o://4c96614256b8ddac32012434f1892486b416b02b9d291c26de73807bb73bdae7" gracePeriod=604799 Jan 27 18:18:40 crc kubenswrapper[5049]: I0127 18:18:40.185068 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" event={"ID":"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3","Type":"ContainerStarted","Data":"882ba566feda02029bd733dc7145cd37495c90b4222c38075ccd23b187404abf"} Jan 27 18:18:40 crc kubenswrapper[5049]: I0127 18:18:40.185159 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:18:40 crc kubenswrapper[5049]: I0127 18:18:40.218653 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" podStartSLOduration=3.218628991 podStartE2EDuration="3.218628991s" podCreationTimestamp="2026-01-27 18:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:18:40.204114745 +0000 UTC m=+4895.303088294" watchObservedRunningTime="2026-01-27 18:18:40.218628991 +0000 UTC m=+4895.317602550" Jan 27 18:18:40 crc kubenswrapper[5049]: I0127 18:18:40.887297 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b687fc46-d052-4e19-a322-17b720747080" containerName="rabbitmq" containerID="cri-o://ad666b516e6350d5abf82391a313bd38618ca768b76049f062ca61ae31099c81" gracePeriod=604799 Jan 27 18:18:41 crc 
kubenswrapper[5049]: I0127 18:18:41.461080 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0bf947e0-b153-4bed-93e6-35eef778805c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.247:5672: connect: connection refused" Jan 27 18:18:41 crc kubenswrapper[5049]: I0127 18:18:41.686254 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b687fc46-d052-4e19-a322-17b720747080" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.248:5672: connect: connection refused" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.263830 5049 generic.go:334] "Generic (PLEG): container finished" podID="0bf947e0-b153-4bed-93e6-35eef778805c" containerID="4c96614256b8ddac32012434f1892486b416b02b9d291c26de73807bb73bdae7" exitCode=0 Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.264001 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bf947e0-b153-4bed-93e6-35eef778805c","Type":"ContainerDied","Data":"4c96614256b8ddac32012434f1892486b416b02b9d291c26de73807bb73bdae7"} Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.566459 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.701002 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a5ca29-0654-4d03-983b-601420b597f3\") pod \"0bf947e0-b153-4bed-93e6-35eef778805c\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.701083 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-confd\") pod \"0bf947e0-b153-4bed-93e6-35eef778805c\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.701150 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bf947e0-b153-4bed-93e6-35eef778805c-pod-info\") pod \"0bf947e0-b153-4bed-93e6-35eef778805c\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.701190 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bf947e0-b153-4bed-93e6-35eef778805c-plugins-conf\") pod \"0bf947e0-b153-4bed-93e6-35eef778805c\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.701236 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bf947e0-b153-4bed-93e6-35eef778805c-erlang-cookie-secret\") pod \"0bf947e0-b153-4bed-93e6-35eef778805c\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.701287 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-plugins\") pod \"0bf947e0-b153-4bed-93e6-35eef778805c\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.701326 5049 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29ssq\" (UniqueName: \"kubernetes.io/projected/0bf947e0-b153-4bed-93e6-35eef778805c-kube-api-access-29ssq\") pod \"0bf947e0-b153-4bed-93e6-35eef778805c\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.701392 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bf947e0-b153-4bed-93e6-35eef778805c-server-conf\") pod \"0bf947e0-b153-4bed-93e6-35eef778805c\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.701452 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-erlang-cookie\") pod \"0bf947e0-b153-4bed-93e6-35eef778805c\" (UID: \"0bf947e0-b153-4bed-93e6-35eef778805c\") " Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.702135 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bf947e0-b153-4bed-93e6-35eef778805c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0bf947e0-b153-4bed-93e6-35eef778805c" (UID: "0bf947e0-b153-4bed-93e6-35eef778805c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.702736 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0bf947e0-b153-4bed-93e6-35eef778805c" (UID: "0bf947e0-b153-4bed-93e6-35eef778805c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.702746 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0bf947e0-b153-4bed-93e6-35eef778805c" (UID: "0bf947e0-b153-4bed-93e6-35eef778805c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.706900 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf947e0-b153-4bed-93e6-35eef778805c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0bf947e0-b153-4bed-93e6-35eef778805c" (UID: "0bf947e0-b153-4bed-93e6-35eef778805c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.707208 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf947e0-b153-4bed-93e6-35eef778805c-kube-api-access-29ssq" (OuterVolumeSpecName: "kube-api-access-29ssq") pod "0bf947e0-b153-4bed-93e6-35eef778805c" (UID: "0bf947e0-b153-4bed-93e6-35eef778805c"). InnerVolumeSpecName "kube-api-access-29ssq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.708545 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0bf947e0-b153-4bed-93e6-35eef778805c-pod-info" (OuterVolumeSpecName: "pod-info") pod "0bf947e0-b153-4bed-93e6-35eef778805c" (UID: "0bf947e0-b153-4bed-93e6-35eef778805c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.719082 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a5ca29-0654-4d03-983b-601420b597f3" (OuterVolumeSpecName: "persistence") pod "0bf947e0-b153-4bed-93e6-35eef778805c" (UID: "0bf947e0-b153-4bed-93e6-35eef778805c"). InnerVolumeSpecName "pvc-56a5ca29-0654-4d03-983b-601420b597f3". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.726232 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bf947e0-b153-4bed-93e6-35eef778805c-server-conf" (OuterVolumeSpecName: "server-conf") pod "0bf947e0-b153-4bed-93e6-35eef778805c" (UID: "0bf947e0-b153-4bed-93e6-35eef778805c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.789972 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0bf947e0-b153-4bed-93e6-35eef778805c" (UID: "0bf947e0-b153-4bed-93e6-35eef778805c"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.804035 5049 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.804107 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-56a5ca29-0654-4d03-983b-601420b597f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a5ca29-0654-4d03-983b-601420b597f3\") on node \"crc\" " Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.804147 5049 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.804164 5049 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bf947e0-b153-4bed-93e6-35eef778805c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.804177 5049 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bf947e0-b153-4bed-93e6-35eef778805c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.804191 5049 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bf947e0-b153-4bed-93e6-35eef778805c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.804206 5049 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bf947e0-b153-4bed-93e6-35eef778805c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.804219 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29ssq\" (UniqueName: \"kubernetes.io/projected/0bf947e0-b153-4bed-93e6-35eef778805c-kube-api-access-29ssq\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.804233 5049 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bf947e0-b153-4bed-93e6-35eef778805c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.818630 5049 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.818823 5049 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-56a5ca29-0654-4d03-983b-601420b597f3" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a5ca29-0654-4d03-983b-601420b597f3") on node "crc" Jan 27 18:18:46 crc kubenswrapper[5049]: I0127 18:18:46.906510 5049 reconciler_common.go:293] "Volume detached for volume \"pvc-56a5ca29-0654-4d03-983b-601420b597f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a5ca29-0654-4d03-983b-601420b597f3\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.280152 5049 generic.go:334] "Generic (PLEG): container finished" podID="b687fc46-d052-4e19-a322-17b720747080" containerID="ad666b516e6350d5abf82391a313bd38618ca768b76049f062ca61ae31099c81" exitCode=0 Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.280270 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b687fc46-d052-4e19-a322-17b720747080","Type":"ContainerDied","Data":"ad666b516e6350d5abf82391a313bd38618ca768b76049f062ca61ae31099c81"} Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.283753 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0bf947e0-b153-4bed-93e6-35eef778805c","Type":"ContainerDied","Data":"388194fb476f3c17b5b13aef4b9110ea22446d02210701737d967ca3753462fa"} Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.283799 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.283836 5049 scope.go:117] "RemoveContainer" containerID="4c96614256b8ddac32012434f1892486b416b02b9d291c26de73807bb73bdae7" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.322712 5049 scope.go:117] "RemoveContainer" containerID="ae4331d1ea8476faf0063ce94bac4e77cd5e7c4909281c9af3cdb53ecd05aeb6" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.325927 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.343651 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.362084 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:18:47 crc kubenswrapper[5049]: E0127 18:18:47.362509 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf947e0-b153-4bed-93e6-35eef778805c" containerName="setup-container" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.362524 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf947e0-b153-4bed-93e6-35eef778805c" containerName="setup-container" Jan 27 18:18:47 crc kubenswrapper[5049]: E0127 18:18:47.362547 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf947e0-b153-4bed-93e6-35eef778805c" containerName="rabbitmq" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.362557 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf947e0-b153-4bed-93e6-35eef778805c" containerName="rabbitmq" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.362763 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf947e0-b153-4bed-93e6-35eef778805c" containerName="rabbitmq" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.363800 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.367355 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.367618 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.367760 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.367955 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.367973 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-654v2" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.370094 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.421089 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b46d5528-2364-438e-8b91-18a085b8c625-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.421134 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-56a5ca29-0654-4d03-983b-601420b597f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a5ca29-0654-4d03-983b-601420b597f3\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.421193 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b46d5528-2364-438e-8b91-18a085b8c625-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.421226 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b46d5528-2364-438e-8b91-18a085b8c625-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.421279 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b46d5528-2364-438e-8b91-18a085b8c625-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.421306 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b46d5528-2364-438e-8b91-18a085b8c625-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.421329 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b46d5528-2364-438e-8b91-18a085b8c625-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.421386 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6xcp\" (UniqueName: \"kubernetes.io/projected/b46d5528-2364-438e-8b91-18a085b8c625-kube-api-access-q6xcp\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.421410 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b46d5528-2364-438e-8b91-18a085b8c625-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.523176 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b46d5528-2364-438e-8b91-18a085b8c625-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.523248 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b46d5528-2364-438e-8b91-18a085b8c625-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.523273 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b46d5528-2364-438e-8b91-18a085b8c625-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.523290 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b46d5528-2364-438e-8b91-18a085b8c625-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.523335 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6xcp\" (UniqueName: \"kubernetes.io/projected/b46d5528-2364-438e-8b91-18a085b8c625-kube-api-access-q6xcp\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.523352 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b46d5528-2364-438e-8b91-18a085b8c625-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.523371 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b46d5528-2364-438e-8b91-18a085b8c625-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.523392 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-56a5ca29-0654-4d03-983b-601420b597f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a5ca29-0654-4d03-983b-601420b597f3\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.523440 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b46d5528-2364-438e-8b91-18a085b8c625-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.524411 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b46d5528-2364-438e-8b91-18a085b8c625-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.525298 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b46d5528-2364-438e-8b91-18a085b8c625-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.525379 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b46d5528-2364-438e-8b91-18a085b8c625-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.528283 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b46d5528-2364-438e-8b91-18a085b8c625-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.529796 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b46d5528-2364-438e-8b91-18a085b8c625-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.530615 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.530714 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-56a5ca29-0654-4d03-983b-601420b597f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a5ca29-0654-4d03-983b-601420b597f3\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c97404806f98214d28011a0771e9a1e94251d52895e19c3f4ff7480a3060213c/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.532520 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b46d5528-2364-438e-8b91-18a085b8c625-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.533976 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b46d5528-2364-438e-8b91-18a085b8c625-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.541737 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6xcp\" (UniqueName: \"kubernetes.io/projected/b46d5528-2364-438e-8b91-18a085b8c625-kube-api-access-q6xcp\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.565066 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-56a5ca29-0654-4d03-983b-601420b597f3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a5ca29-0654-4d03-983b-601420b597f3\") pod \"rabbitmq-server-0\" (UID: \"b46d5528-2364-438e-8b91-18a085b8c625\") " pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.593386 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.626977 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxsxv\" (UniqueName: \"kubernetes.io/projected/b687fc46-d052-4e19-a322-17b720747080-kube-api-access-sxsxv\") pod \"b687fc46-d052-4e19-a322-17b720747080\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.627382 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b687fc46-d052-4e19-a322-17b720747080-pod-info\") pod \"b687fc46-d052-4e19-a322-17b720747080\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.629395 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\") pod \"b687fc46-d052-4e19-a322-17b720747080\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.629535 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b687fc46-d052-4e19-a322-17b720747080-server-conf\") pod \"b687fc46-d052-4e19-a322-17b720747080\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.629627 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b687fc46-d052-4e19-a322-17b720747080-erlang-cookie-secret\") pod \"b687fc46-d052-4e19-a322-17b720747080\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.629758 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-erlang-cookie\") pod \"b687fc46-d052-4e19-a322-17b720747080\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.629870 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-confd\") pod \"b687fc46-d052-4e19-a322-17b720747080\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.630009 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b687fc46-d052-4e19-a322-17b720747080-plugins-conf\") pod \"b687fc46-d052-4e19-a322-17b720747080\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.630185 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-plugins\") pod \"b687fc46-d052-4e19-a322-17b720747080\" (UID: \"b687fc46-d052-4e19-a322-17b720747080\") " Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.632314 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b687fc46-d052-4e19-a322-17b720747080-kube-api-access-sxsxv" (OuterVolumeSpecName: 
"kube-api-access-sxsxv") pod "b687fc46-d052-4e19-a322-17b720747080" (UID: "b687fc46-d052-4e19-a322-17b720747080"). InnerVolumeSpecName "kube-api-access-sxsxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.633190 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b687fc46-d052-4e19-a322-17b720747080-pod-info" (OuterVolumeSpecName: "pod-info") pod "b687fc46-d052-4e19-a322-17b720747080" (UID: "b687fc46-d052-4e19-a322-17b720747080"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.633741 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b687fc46-d052-4e19-a322-17b720747080" (UID: "b687fc46-d052-4e19-a322-17b720747080"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.633964 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b687fc46-d052-4e19-a322-17b720747080-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b687fc46-d052-4e19-a322-17b720747080" (UID: "b687fc46-d052-4e19-a322-17b720747080"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.634399 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b687fc46-d052-4e19-a322-17b720747080" (UID: "b687fc46-d052-4e19-a322-17b720747080"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.635146 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b687fc46-d052-4e19-a322-17b720747080-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b687fc46-d052-4e19-a322-17b720747080" (UID: "b687fc46-d052-4e19-a322-17b720747080"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.649915 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc" (OuterVolumeSpecName: "persistence") pod "b687fc46-d052-4e19-a322-17b720747080" (UID: "b687fc46-d052-4e19-a322-17b720747080"). InnerVolumeSpecName "pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.659326 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf947e0-b153-4bed-93e6-35eef778805c" path="/var/lib/kubelet/pods/0bf947e0-b153-4bed-93e6-35eef778805c/volumes" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.676288 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b687fc46-d052-4e19-a322-17b720747080-server-conf" (OuterVolumeSpecName: "server-conf") pod "b687fc46-d052-4e19-a322-17b720747080" (UID: "b687fc46-d052-4e19-a322-17b720747080"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.694056 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.732228 5049 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b687fc46-d052-4e19-a322-17b720747080-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.732279 5049 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\") on node \"crc\" " Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.732294 5049 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b687fc46-d052-4e19-a322-17b720747080-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.732306 5049 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b687fc46-d052-4e19-a322-17b720747080-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.732315 5049 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.732323 5049 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b687fc46-d052-4e19-a322-17b720747080-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.732334 5049 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.732343 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxsxv\" (UniqueName: \"kubernetes.io/projected/b687fc46-d052-4e19-a322-17b720747080-kube-api-access-sxsxv\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.745257 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b687fc46-d052-4e19-a322-17b720747080" (UID: "b687fc46-d052-4e19-a322-17b720747080"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.748281 5049 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.748437 5049 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc") on node "crc" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.812196 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.838887 5049 reconciler_common.go:293] "Volume detached for volume \"pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.838935 5049 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b687fc46-d052-4e19-a322-17b720747080-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.863793 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-5gwkx"] Jan 27 18:18:47 crc kubenswrapper[5049]: I0127 18:18:47.864010 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" podUID="235d59dd-01cd-4eca-ba15-fca1e9d9241f" containerName="dnsmasq-dns" containerID="cri-o://9fdcaaced66898946e0e4c34729333d369b9e3b942e4c486bd8296942b742859" gracePeriod=10 Jan 27 18:18:48 crc kubenswrapper[5049]: W0127 18:18:48.140709 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb46d5528_2364_438e_8b91_18a085b8c625.slice/crio-76baa03c39b489d0605e71d7e94d93f73e1ffe5ae51904dee3a96d4eed18c368 WatchSource:0}: Error finding container 76baa03c39b489d0605e71d7e94d93f73e1ffe5ae51904dee3a96d4eed18c368: Status 404 returned error can't find the container with id 76baa03c39b489d0605e71d7e94d93f73e1ffe5ae51904dee3a96d4eed18c368 Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.141434 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.292251 5049 generic.go:334] "Generic (PLEG): container finished" podID="235d59dd-01cd-4eca-ba15-fca1e9d9241f" containerID="9fdcaaced66898946e0e4c34729333d369b9e3b942e4c486bd8296942b742859" exitCode=0 Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.292349 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" event={"ID":"235d59dd-01cd-4eca-ba15-fca1e9d9241f","Type":"ContainerDied","Data":"9fdcaaced66898946e0e4c34729333d369b9e3b942e4c486bd8296942b742859"} Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.294457 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b687fc46-d052-4e19-a322-17b720747080","Type":"ContainerDied","Data":"9198c97d56ae7a0128a82a9cbba4a1b39ba9ba9cf90d83d5fe7702cedede1dc2"} Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.294509 5049 scope.go:117] "RemoveContainer" containerID="ad666b516e6350d5abf82391a313bd38618ca768b76049f062ca61ae31099c81" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.294783 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.298357 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b46d5528-2364-438e-8b91-18a085b8c625","Type":"ContainerStarted","Data":"76baa03c39b489d0605e71d7e94d93f73e1ffe5ae51904dee3a96d4eed18c368"} Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.310745 5049 scope.go:117] "RemoveContainer" containerID="11cc7f388ec05015c84bc69449fb5202d0f75eb9ad3ed10f23ba505e2bdfa46c" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.390817 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.394812 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.399805 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:18:48 crc kubenswrapper[5049]: E0127 18:18:48.400170 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b687fc46-d052-4e19-a322-17b720747080" containerName="setup-container" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.400190 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b687fc46-d052-4e19-a322-17b720747080" containerName="setup-container" Jan 27 18:18:48 crc kubenswrapper[5049]: E0127 18:18:48.400205 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b687fc46-d052-4e19-a322-17b720747080" containerName="rabbitmq" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.400212 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b687fc46-d052-4e19-a322-17b720747080" containerName="rabbitmq" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.400349 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b687fc46-d052-4e19-a322-17b720747080" containerName="rabbitmq" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.401149 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.402838 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.403783 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-r9gwl" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.403762 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.403991 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.404155 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.405724 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.456890 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhz49\" (UniqueName: \"kubernetes.io/projected/19000626-d51a-422d-b61d-caec78fc08ad-kube-api-access-nhz49\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.456938 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19000626-d51a-422d-b61d-caec78fc08ad-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.456964 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19000626-d51a-422d-b61d-caec78fc08ad-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.457010 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19000626-d51a-422d-b61d-caec78fc08ad-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.457037 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19000626-d51a-422d-b61d-caec78fc08ad-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.457149 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19000626-d51a-422d-b61d-caec78fc08ad-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc 
kubenswrapper[5049]: I0127 18:18:48.457211 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19000626-d51a-422d-b61d-caec78fc08ad-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.457244 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.457260 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/19000626-d51a-422d-b61d-caec78fc08ad-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.558589 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19000626-d51a-422d-b61d-caec78fc08ad-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.558641 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19000626-d51a-422d-b61d-caec78fc08ad-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.558666 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19000626-d51a-422d-b61d-caec78fc08ad-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.558731 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19000626-d51a-422d-b61d-caec78fc08ad-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.558751 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19000626-d51a-422d-b61d-caec78fc08ad-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.558770 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 
18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.558800 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/19000626-d51a-422d-b61d-caec78fc08ad-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.558853 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhz49\" (UniqueName: \"kubernetes.io/projected/19000626-d51a-422d-b61d-caec78fc08ad-kube-api-access-nhz49\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.558879 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19000626-d51a-422d-b61d-caec78fc08ad-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.559905 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19000626-d51a-422d-b61d-caec78fc08ad-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.563062 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19000626-d51a-422d-b61d-caec78fc08ad-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.563328 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19000626-d51a-422d-b61d-caec78fc08ad-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.564208 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19000626-d51a-422d-b61d-caec78fc08ad-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.565186 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19000626-d51a-422d-b61d-caec78fc08ad-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.565410 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19000626-d51a-422d-b61d-caec78fc08ad-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.565638 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/19000626-d51a-422d-b61d-caec78fc08ad-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.570090 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.570120 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f8f92899ad06486a1cbd0ade420cbde09769e29832e08ef880c263f5b9eae28b/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.597530 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhz49\" (UniqueName: \"kubernetes.io/projected/19000626-d51a-422d-b61d-caec78fc08ad-kube-api-access-nhz49\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.599742 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab226858-14e3-49ed-99a2-5e4e996b4fcc\") pod \"rabbitmq-cell1-server-0\" (UID: \"19000626-d51a-422d-b61d-caec78fc08ad\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.725843 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.800175 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.863286 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/235d59dd-01cd-4eca-ba15-fca1e9d9241f-dns-svc\") pod \"235d59dd-01cd-4eca-ba15-fca1e9d9241f\" (UID: \"235d59dd-01cd-4eca-ba15-fca1e9d9241f\") " Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.863617 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq82c\" (UniqueName: \"kubernetes.io/projected/235d59dd-01cd-4eca-ba15-fca1e9d9241f-kube-api-access-qq82c\") pod \"235d59dd-01cd-4eca-ba15-fca1e9d9241f\" (UID: \"235d59dd-01cd-4eca-ba15-fca1e9d9241f\") " Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.863724 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/235d59dd-01cd-4eca-ba15-fca1e9d9241f-config\") pod \"235d59dd-01cd-4eca-ba15-fca1e9d9241f\" (UID: \"235d59dd-01cd-4eca-ba15-fca1e9d9241f\") " Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.867290 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/235d59dd-01cd-4eca-ba15-fca1e9d9241f-kube-api-access-qq82c" (OuterVolumeSpecName: "kube-api-access-qq82c") pod "235d59dd-01cd-4eca-ba15-fca1e9d9241f" (UID: "235d59dd-01cd-4eca-ba15-fca1e9d9241f"). InnerVolumeSpecName "kube-api-access-qq82c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.902955 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/235d59dd-01cd-4eca-ba15-fca1e9d9241f-config" (OuterVolumeSpecName: "config") pod "235d59dd-01cd-4eca-ba15-fca1e9d9241f" (UID: "235d59dd-01cd-4eca-ba15-fca1e9d9241f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.906151 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/235d59dd-01cd-4eca-ba15-fca1e9d9241f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "235d59dd-01cd-4eca-ba15-fca1e9d9241f" (UID: "235d59dd-01cd-4eca-ba15-fca1e9d9241f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.966037 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/235d59dd-01cd-4eca-ba15-fca1e9d9241f-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.966064 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/235d59dd-01cd-4eca-ba15-fca1e9d9241f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:48 crc kubenswrapper[5049]: I0127 18:18:48.966074 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq82c\" (UniqueName: \"kubernetes.io/projected/235d59dd-01cd-4eca-ba15-fca1e9d9241f-kube-api-access-qq82c\") on node \"crc\" DevicePath \"\"" Jan 27 18:18:49 crc kubenswrapper[5049]: W0127 18:18:49.197977 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19000626_d51a_422d_b61d_caec78fc08ad.slice/crio-5111f4fdc264f9bbe1a53d6ee9eb63cdb31f129bed8f8112b9cac97a99f28152 WatchSource:0}: Error finding container 5111f4fdc264f9bbe1a53d6ee9eb63cdb31f129bed8f8112b9cac97a99f28152: Status 404 returned error can't find the container with id 5111f4fdc264f9bbe1a53d6ee9eb63cdb31f129bed8f8112b9cac97a99f28152 Jan 27 18:18:49 crc kubenswrapper[5049]: I0127 18:18:49.199502 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 18:18:49 crc kubenswrapper[5049]: I0127 18:18:49.311092 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" event={"ID":"235d59dd-01cd-4eca-ba15-fca1e9d9241f","Type":"ContainerDied","Data":"2e45e7061187f183aa89ef58994e3d99e2dddec3ca6c7e0c6bbc5f4839f6b3f7"} Jan 27 18:18:49 crc kubenswrapper[5049]: I0127 18:18:49.311151 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-5gwkx" Jan 27 18:18:49 crc kubenswrapper[5049]: I0127 18:18:49.311163 5049 scope.go:117] "RemoveContainer" containerID="9fdcaaced66898946e0e4c34729333d369b9e3b942e4c486bd8296942b742859" Jan 27 18:18:49 crc kubenswrapper[5049]: I0127 18:18:49.313568 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"19000626-d51a-422d-b61d-caec78fc08ad","Type":"ContainerStarted","Data":"5111f4fdc264f9bbe1a53d6ee9eb63cdb31f129bed8f8112b9cac97a99f28152"} Jan 27 18:18:49 crc kubenswrapper[5049]: I0127 18:18:49.327704 5049 scope.go:117] "RemoveContainer" containerID="57e57e2aaea33ad4b286deb570618509d0a8bf4a8ac24925cdb5af395a6e0c5b" Jan 27 18:18:49 crc kubenswrapper[5049]: I0127 18:18:49.384289 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-5gwkx"] Jan 27 18:18:49 crc kubenswrapper[5049]: I0127 18:18:49.389319 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-5gwkx"] Jan 27 18:18:49 crc kubenswrapper[5049]: I0127 18:18:49.661134 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="235d59dd-01cd-4eca-ba15-fca1e9d9241f" path="/var/lib/kubelet/pods/235d59dd-01cd-4eca-ba15-fca1e9d9241f/volumes" Jan 27 18:18:49 crc kubenswrapper[5049]: I0127 18:18:49.662033 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b687fc46-d052-4e19-a322-17b720747080" path="/var/lib/kubelet/pods/b687fc46-d052-4e19-a322-17b720747080/volumes" Jan 27 18:18:50 crc kubenswrapper[5049]: I0127 18:18:50.329710 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b46d5528-2364-438e-8b91-18a085b8c625","Type":"ContainerStarted","Data":"b1530698a0ae965360d0debb77db3290ce3b24fd2f2ca45612f7763e3cc31515"} Jan 27 18:18:51 crc kubenswrapper[5049]: I0127 18:18:51.339665 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"19000626-d51a-422d-b61d-caec78fc08ad","Type":"ContainerStarted","Data":"48215d1c0cb3e46e9df267cb56b89dcc35ed189f8e80a976c02f8044f5c418b1"} Jan 27 18:19:23 crc kubenswrapper[5049]: I0127 18:19:23.595343 5049 generic.go:334] "Generic (PLEG): container finished" podID="19000626-d51a-422d-b61d-caec78fc08ad" containerID="48215d1c0cb3e46e9df267cb56b89dcc35ed189f8e80a976c02f8044f5c418b1" exitCode=0 Jan 27 18:19:23 crc kubenswrapper[5049]: I0127 18:19:23.595418 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"19000626-d51a-422d-b61d-caec78fc08ad","Type":"ContainerDied","Data":"48215d1c0cb3e46e9df267cb56b89dcc35ed189f8e80a976c02f8044f5c418b1"} Jan 27 18:19:23 crc kubenswrapper[5049]: I0127 18:19:23.598779 5049 generic.go:334] "Generic (PLEG): container finished" podID="b46d5528-2364-438e-8b91-18a085b8c625" containerID="b1530698a0ae965360d0debb77db3290ce3b24fd2f2ca45612f7763e3cc31515" exitCode=0 Jan 27 18:19:23 crc kubenswrapper[5049]: I0127 18:19:23.598847 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b46d5528-2364-438e-8b91-18a085b8c625","Type":"ContainerDied","Data":"b1530698a0ae965360d0debb77db3290ce3b24fd2f2ca45612f7763e3cc31515"} Jan 27 18:19:24 crc kubenswrapper[5049]: I0127 18:19:24.611446 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"b46d5528-2364-438e-8b91-18a085b8c625","Type":"ContainerStarted","Data":"d3a1d65bb27d56b6284038b8a1706f3ee0319beea6c1b071e05e76201ef188c3"} Jan 27 18:19:24 crc kubenswrapper[5049]: I0127 18:19:24.612081 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 18:19:24 crc kubenswrapper[5049]: I0127 18:19:24.614521 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"19000626-d51a-422d-b61d-caec78fc08ad","Type":"ContainerStarted","Data":"79fa385f705928c8951bd3b0b18954461de96dd48127b638589822cd252d4851"} Jan 27 18:19:24 crc kubenswrapper[5049]: I0127 18:19:24.614828 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:19:24 crc kubenswrapper[5049]: I0127 18:19:24.649494 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.64947115 podStartE2EDuration="37.64947115s" podCreationTimestamp="2026-01-27 18:18:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:19:24.646864837 +0000 UTC m=+4939.745838456" watchObservedRunningTime="2026-01-27 18:19:24.64947115 +0000 UTC m=+4939.748444699" Jan 27 18:19:24 crc kubenswrapper[5049]: I0127 18:19:24.690069 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.690040793 podStartE2EDuration="36.690040793s" podCreationTimestamp="2026-01-27 18:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:19:24.683033428 +0000 UTC m=+4939.782006977" watchObservedRunningTime="2026-01-27 18:19:24.690040793 +0000 UTC m=+4939.789014382" Jan 27 18:19:37 crc kubenswrapper[5049]: I0127 18:19:37.699962 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 18:19:38 crc kubenswrapper[5049]: I0127 18:19:38.729052 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 18:19:50 crc kubenswrapper[5049]: I0127 18:19:50.169824 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Jan 27 18:19:50 crc kubenswrapper[5049]: E0127 18:19:50.171536 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235d59dd-01cd-4eca-ba15-fca1e9d9241f" containerName="dnsmasq-dns" Jan 27 18:19:50 crc kubenswrapper[5049]: I0127 18:19:50.171572 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="235d59dd-01cd-4eca-ba15-fca1e9d9241f" containerName="dnsmasq-dns" Jan 27 18:19:50 crc kubenswrapper[5049]: E0127 18:19:50.171623 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235d59dd-01cd-4eca-ba15-fca1e9d9241f" containerName="init" Jan 27 18:19:50 crc kubenswrapper[5049]: I0127 18:19:50.171636 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="235d59dd-01cd-4eca-ba15-fca1e9d9241f" containerName="init" Jan 27 18:19:50 crc kubenswrapper[5049]: I0127 18:19:50.171917 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="235d59dd-01cd-4eca-ba15-fca1e9d9241f" containerName="dnsmasq-dns" Jan 27 18:19:50 crc kubenswrapper[5049]: I0127 18:19:50.172831 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Jan 27 18:19:50 crc kubenswrapper[5049]: I0127 18:19:50.175737 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-d672s" Jan 27 18:19:50 crc kubenswrapper[5049]: I0127 18:19:50.182842 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 27 18:19:50 crc kubenswrapper[5049]: I0127 18:19:50.271750 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zxk2\" (UniqueName: \"kubernetes.io/projected/c15e88f9-a946-49dd-bdef-d07a6f87a3d0-kube-api-access-7zxk2\") pod \"mariadb-client\" (UID: \"c15e88f9-a946-49dd-bdef-d07a6f87a3d0\") " pod="openstack/mariadb-client" Jan 27 18:19:50 crc kubenswrapper[5049]: I0127 18:19:50.374633 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zxk2\" (UniqueName: \"kubernetes.io/projected/c15e88f9-a946-49dd-bdef-d07a6f87a3d0-kube-api-access-7zxk2\") pod \"mariadb-client\" (UID: \"c15e88f9-a946-49dd-bdef-d07a6f87a3d0\") " pod="openstack/mariadb-client" Jan 27 18:19:50 crc kubenswrapper[5049]: I0127 18:19:50.405635 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zxk2\" (UniqueName: \"kubernetes.io/projected/c15e88f9-a946-49dd-bdef-d07a6f87a3d0-kube-api-access-7zxk2\") pod \"mariadb-client\" (UID: \"c15e88f9-a946-49dd-bdef-d07a6f87a3d0\") " pod="openstack/mariadb-client" Jan 27 18:19:50 crc kubenswrapper[5049]: I0127 18:19:50.505959 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 27 18:19:50 crc kubenswrapper[5049]: I0127 18:19:50.843102 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Jan 27 18:19:50 crc kubenswrapper[5049]: W0127 18:19:50.856583 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc15e88f9_a946_49dd_bdef_d07a6f87a3d0.slice/crio-7f1a157a517bbadb93549f47f2d300f47672702fd77abb97b468d69765dbdb06 WatchSource:0}: Error finding container 7f1a157a517bbadb93549f47f2d300f47672702fd77abb97b468d69765dbdb06: Status 404 returned error can't find the container with id 7f1a157a517bbadb93549f47f2d300f47672702fd77abb97b468d69765dbdb06 Jan 27 18:19:51 crc kubenswrapper[5049]: I0127 18:19:51.856867 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"c15e88f9-a946-49dd-bdef-d07a6f87a3d0","Type":"ContainerStarted","Data":"2e6d401d35cb28c12e40a1fd89f6d5431e765255884081d2b8e3daf9bf7cec92"} Jan 27 18:19:51 crc kubenswrapper[5049]: I0127 18:19:51.856941 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"c15e88f9-a946-49dd-bdef-d07a6f87a3d0","Type":"ContainerStarted","Data":"7f1a157a517bbadb93549f47f2d300f47672702fd77abb97b468d69765dbdb06"} Jan 27 18:19:51 crc kubenswrapper[5049]: I0127 18:19:51.891801 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=1.89177076 podStartE2EDuration="1.89177076s" podCreationTimestamp="2026-01-27 18:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:19:51.877938224 +0000 UTC m=+4966.976911773" watchObservedRunningTime="2026-01-27 18:19:51.89177076 +0000 UTC m=+4966.990744309" Jan 27 18:20:07 crc 
kubenswrapper[5049]: I0127 18:20:07.299764 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 27 18:20:07 crc kubenswrapper[5049]: I0127 18:20:07.301323 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="c15e88f9-a946-49dd-bdef-d07a6f87a3d0" containerName="mariadb-client" containerID="cri-o://2e6d401d35cb28c12e40a1fd89f6d5431e765255884081d2b8e3daf9bf7cec92" gracePeriod=30 Jan 27 18:20:07 crc kubenswrapper[5049]: I0127 18:20:07.895869 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 27 18:20:08 crc kubenswrapper[5049]: I0127 18:20:08.015287 5049 generic.go:334] "Generic (PLEG): container finished" podID="c15e88f9-a946-49dd-bdef-d07a6f87a3d0" containerID="2e6d401d35cb28c12e40a1fd89f6d5431e765255884081d2b8e3daf9bf7cec92" exitCode=143 Jan 27 18:20:08 crc kubenswrapper[5049]: I0127 18:20:08.015350 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"c15e88f9-a946-49dd-bdef-d07a6f87a3d0","Type":"ContainerDied","Data":"2e6d401d35cb28c12e40a1fd89f6d5431e765255884081d2b8e3daf9bf7cec92"} Jan 27 18:20:08 crc kubenswrapper[5049]: I0127 18:20:08.015357 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Jan 27 18:20:08 crc kubenswrapper[5049]: I0127 18:20:08.015392 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"c15e88f9-a946-49dd-bdef-d07a6f87a3d0","Type":"ContainerDied","Data":"7f1a157a517bbadb93549f47f2d300f47672702fd77abb97b468d69765dbdb06"} Jan 27 18:20:08 crc kubenswrapper[5049]: I0127 18:20:08.015419 5049 scope.go:117] "RemoveContainer" containerID="2e6d401d35cb28c12e40a1fd89f6d5431e765255884081d2b8e3daf9bf7cec92" Jan 27 18:20:08 crc kubenswrapper[5049]: I0127 18:20:08.021110 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zxk2\" (UniqueName: \"kubernetes.io/projected/c15e88f9-a946-49dd-bdef-d07a6f87a3d0-kube-api-access-7zxk2\") pod \"c15e88f9-a946-49dd-bdef-d07a6f87a3d0\" (UID: \"c15e88f9-a946-49dd-bdef-d07a6f87a3d0\") " Jan 27 18:20:08 crc kubenswrapper[5049]: I0127 18:20:08.028857 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15e88f9-a946-49dd-bdef-d07a6f87a3d0-kube-api-access-7zxk2" (OuterVolumeSpecName: "kube-api-access-7zxk2") pod "c15e88f9-a946-49dd-bdef-d07a6f87a3d0" (UID: "c15e88f9-a946-49dd-bdef-d07a6f87a3d0"). InnerVolumeSpecName "kube-api-access-7zxk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:20:08 crc kubenswrapper[5049]: I0127 18:20:08.047345 5049 scope.go:117] "RemoveContainer" containerID="2e6d401d35cb28c12e40a1fd89f6d5431e765255884081d2b8e3daf9bf7cec92" Jan 27 18:20:08 crc kubenswrapper[5049]: E0127 18:20:08.047772 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e6d401d35cb28c12e40a1fd89f6d5431e765255884081d2b8e3daf9bf7cec92\": container with ID starting with 2e6d401d35cb28c12e40a1fd89f6d5431e765255884081d2b8e3daf9bf7cec92 not found: ID does not exist" containerID="2e6d401d35cb28c12e40a1fd89f6d5431e765255884081d2b8e3daf9bf7cec92" Jan 27 18:20:08 crc kubenswrapper[5049]: I0127 18:20:08.047807 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e6d401d35cb28c12e40a1fd89f6d5431e765255884081d2b8e3daf9bf7cec92"} err="failed to get container status \"2e6d401d35cb28c12e40a1fd89f6d5431e765255884081d2b8e3daf9bf7cec92\": rpc error: code = NotFound desc = could not find container \"2e6d401d35cb28c12e40a1fd89f6d5431e765255884081d2b8e3daf9bf7cec92\": container with ID starting with 2e6d401d35cb28c12e40a1fd89f6d5431e765255884081d2b8e3daf9bf7cec92 not found: ID does not exist" Jan 27 18:20:08 crc kubenswrapper[5049]: I0127 18:20:08.122887 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zxk2\" (UniqueName: \"kubernetes.io/projected/c15e88f9-a946-49dd-bdef-d07a6f87a3d0-kube-api-access-7zxk2\") on node \"crc\" DevicePath \"\"" Jan 27 18:20:08 crc kubenswrapper[5049]: I0127 18:20:08.356897 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Jan 27 18:20:08 crc kubenswrapper[5049]: I0127 18:20:08.366897 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Jan 27 18:20:09 crc kubenswrapper[5049]: I0127 18:20:09.665975 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c15e88f9-a946-49dd-bdef-d07a6f87a3d0" path="/var/lib/kubelet/pods/c15e88f9-a946-49dd-bdef-d07a6f87a3d0/volumes" Jan 27 18:20:47 crc kubenswrapper[5049]: I0127 18:20:47.781230 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:20:47 crc kubenswrapper[5049]: I0127 18:20:47.781856 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:21:16 crc kubenswrapper[5049]: I0127 18:21:16.284497 5049 scope.go:117] "RemoveContainer" containerID="8a35eebfb2a233bda976b5ad052dfb82668f1e022dbeb49b634ef5f93561ee61" Jan 27 18:21:17 crc kubenswrapper[5049]: I0127 18:21:17.781458 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:21:17 crc kubenswrapper[5049]: I0127 18:21:17.781724 5049 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:21:47 crc kubenswrapper[5049]: I0127 18:21:47.781808 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:21:47 crc kubenswrapper[5049]: I0127 18:21:47.782640 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:21:47 crc kubenswrapper[5049]: I0127 18:21:47.782768 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 18:21:47 crc kubenswrapper[5049]: I0127 18:21:47.783553 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cbf92023f90d1db31fc1c7a24fc76dae2e5df0bcbeedcf8ac7fb0089684228a6"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:21:47 crc kubenswrapper[5049]: I0127 18:21:47.783644 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://cbf92023f90d1db31fc1c7a24fc76dae2e5df0bcbeedcf8ac7fb0089684228a6" gracePeriod=600 Jan 27 18:21:48 crc kubenswrapper[5049]: I0127 18:21:48.877847 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="cbf92023f90d1db31fc1c7a24fc76dae2e5df0bcbeedcf8ac7fb0089684228a6" exitCode=0 Jan 27 18:21:48 crc kubenswrapper[5049]: I0127 18:21:48.877927 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"cbf92023f90d1db31fc1c7a24fc76dae2e5df0bcbeedcf8ac7fb0089684228a6"} Jan 27 18:21:48 crc kubenswrapper[5049]: I0127 18:21:48.878315 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192"} Jan 27 18:21:48 crc kubenswrapper[5049]: I0127 18:21:48.878347 5049 scope.go:117] "RemoveContainer" containerID="61ce2312dedd61c6e34d70dc19fc960eb93678640cbbfc94bc61c7c71e0faac1" Jan 27 18:22:29 crc kubenswrapper[5049]: I0127 18:22:29.151463 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ckz4g"] Jan 27 18:22:29 crc kubenswrapper[5049]: E0127 18:22:29.152204 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15e88f9-a946-49dd-bdef-d07a6f87a3d0" containerName="mariadb-client" Jan 27 18:22:29 
crc kubenswrapper[5049]: I0127 18:22:29.152217 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15e88f9-a946-49dd-bdef-d07a6f87a3d0" containerName="mariadb-client" Jan 27 18:22:29 crc kubenswrapper[5049]: I0127 18:22:29.152370 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15e88f9-a946-49dd-bdef-d07a6f87a3d0" containerName="mariadb-client" Jan 27 18:22:29 crc kubenswrapper[5049]: I0127 18:22:29.153324 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:29 crc kubenswrapper[5049]: I0127 18:22:29.178020 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ckz4g"] Jan 27 18:22:29 crc kubenswrapper[5049]: I0127 18:22:29.301485 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f9c5e3-afd3-41b8-9775-bef5286102b5-catalog-content\") pod \"certified-operators-ckz4g\" (UID: \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\") " pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:29 crc kubenswrapper[5049]: I0127 18:22:29.301565 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f9c5e3-afd3-41b8-9775-bef5286102b5-utilities\") pod \"certified-operators-ckz4g\" (UID: \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\") " pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:29 crc kubenswrapper[5049]: I0127 18:22:29.301594 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hwc6\" (UniqueName: \"kubernetes.io/projected/f4f9c5e3-afd3-41b8-9775-bef5286102b5-kube-api-access-8hwc6\") pod \"certified-operators-ckz4g\" (UID: \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\") " pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:29 crc kubenswrapper[5049]: I0127 18:22:29.402904 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f9c5e3-afd3-41b8-9775-bef5286102b5-utilities\") pod \"certified-operators-ckz4g\" (UID: \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\") " pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:29 crc kubenswrapper[5049]: I0127 18:22:29.402963 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hwc6\" (UniqueName: \"kubernetes.io/projected/f4f9c5e3-afd3-41b8-9775-bef5286102b5-kube-api-access-8hwc6\") pod \"certified-operators-ckz4g\" (UID: \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\") " pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:29 crc kubenswrapper[5049]: I0127 18:22:29.403048 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f9c5e3-afd3-41b8-9775-bef5286102b5-catalog-content\") pod \"certified-operators-ckz4g\" (UID: \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\") " pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:29 crc kubenswrapper[5049]: I0127 18:22:29.403536 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f9c5e3-afd3-41b8-9775-bef5286102b5-catalog-content\") pod \"certified-operators-ckz4g\" (UID: \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\") " 
pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:29 crc kubenswrapper[5049]: I0127 18:22:29.403821 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f9c5e3-afd3-41b8-9775-bef5286102b5-utilities\") pod \"certified-operators-ckz4g\" (UID: \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\") " pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:29 crc kubenswrapper[5049]: I0127 18:22:29.423426 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hwc6\" (UniqueName: \"kubernetes.io/projected/f4f9c5e3-afd3-41b8-9775-bef5286102b5-kube-api-access-8hwc6\") pod \"certified-operators-ckz4g\" (UID: \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\") " pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:29 crc kubenswrapper[5049]: I0127 18:22:29.469901 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:30 crc kubenswrapper[5049]: I0127 18:22:30.018561 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ckz4g"] Jan 27 18:22:30 crc kubenswrapper[5049]: I0127 18:22:30.225926 5049 generic.go:334] "Generic (PLEG): container finished" podID="f4f9c5e3-afd3-41b8-9775-bef5286102b5" containerID="f71647c2a88b343bd232054e5ef2323317821db8f5bfda1a6630cfeb73fd8b29" exitCode=0 Jan 27 18:22:30 crc kubenswrapper[5049]: I0127 18:22:30.225967 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckz4g" event={"ID":"f4f9c5e3-afd3-41b8-9775-bef5286102b5","Type":"ContainerDied","Data":"f71647c2a88b343bd232054e5ef2323317821db8f5bfda1a6630cfeb73fd8b29"} Jan 27 18:22:30 crc kubenswrapper[5049]: I0127 18:22:30.225992 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckz4g" event={"ID":"f4f9c5e3-afd3-41b8-9775-bef5286102b5","Type":"ContainerStarted","Data":"e8767cb359e5d58bd426101c52dded49c9a7796321f8720244515286ac8afc5d"} Jan 27 18:22:32 crc kubenswrapper[5049]: I0127 18:22:32.243994 5049 generic.go:334] "Generic (PLEG): container finished" podID="f4f9c5e3-afd3-41b8-9775-bef5286102b5" containerID="a87cec51e7d1096efe192fde3caf1a5b3cb3d6287deb9b379a8f00b9e8def2da" exitCode=0 Jan 27 18:22:32 crc kubenswrapper[5049]: I0127 18:22:32.244046 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckz4g" event={"ID":"f4f9c5e3-afd3-41b8-9775-bef5286102b5","Type":"ContainerDied","Data":"a87cec51e7d1096efe192fde3caf1a5b3cb3d6287deb9b379a8f00b9e8def2da"} Jan 27 18:22:33 crc kubenswrapper[5049]: I0127 18:22:33.254977 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckz4g" event={"ID":"f4f9c5e3-afd3-41b8-9775-bef5286102b5","Type":"ContainerStarted","Data":"a3ab9d43f8d0b686654903a641423fa82483b479de9a411c26e219bc33a33bee"} Jan 27 18:22:33 crc kubenswrapper[5049]: I0127 18:22:33.277108 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ckz4g" podStartSLOduration=1.889487841 podStartE2EDuration="4.277087247s" podCreationTimestamp="2026-01-27 18:22:29 +0000 UTC" firstStartedPulling="2026-01-27 18:22:30.227445431 +0000 UTC m=+5125.326418980" lastFinishedPulling="2026-01-27 18:22:32.615044837 +0000 UTC m=+5127.714018386" observedRunningTime="2026-01-27 18:22:33.274421942 +0000 UTC m=+5128.373395491" 
watchObservedRunningTime="2026-01-27 18:22:33.277087247 +0000 UTC m=+5128.376060796" Jan 27 18:22:39 crc kubenswrapper[5049]: I0127 18:22:39.470292 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:39 crc kubenswrapper[5049]: I0127 18:22:39.471138 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:39 crc kubenswrapper[5049]: I0127 18:22:39.519034 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:40 crc kubenswrapper[5049]: I0127 18:22:40.388767 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:40 crc kubenswrapper[5049]: I0127 18:22:40.459774 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ckz4g"] Jan 27 18:22:42 crc kubenswrapper[5049]: I0127 18:22:42.338051 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ckz4g" podUID="f4f9c5e3-afd3-41b8-9775-bef5286102b5" containerName="registry-server" containerID="cri-o://a3ab9d43f8d0b686654903a641423fa82483b479de9a411c26e219bc33a33bee" gracePeriod=2 Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:42.799626 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:42.846599 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f9c5e3-afd3-41b8-9775-bef5286102b5-catalog-content\") pod \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\" (UID: \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\") " Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:42.846668 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f9c5e3-afd3-41b8-9775-bef5286102b5-utilities\") pod \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\" (UID: \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\") " Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:42.846729 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hwc6\" (UniqueName: \"kubernetes.io/projected/f4f9c5e3-afd3-41b8-9775-bef5286102b5-kube-api-access-8hwc6\") pod \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\" (UID: \"f4f9c5e3-afd3-41b8-9775-bef5286102b5\") " Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:42.847475 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4f9c5e3-afd3-41b8-9775-bef5286102b5-utilities" (OuterVolumeSpecName: "utilities") pod "f4f9c5e3-afd3-41b8-9775-bef5286102b5" (UID: "f4f9c5e3-afd3-41b8-9775-bef5286102b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:42.862818 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f9c5e3-afd3-41b8-9775-bef5286102b5-kube-api-access-8hwc6" (OuterVolumeSpecName: "kube-api-access-8hwc6") pod "f4f9c5e3-afd3-41b8-9775-bef5286102b5" (UID: "f4f9c5e3-afd3-41b8-9775-bef5286102b5"). InnerVolumeSpecName "kube-api-access-8hwc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:42.894889 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4f9c5e3-afd3-41b8-9775-bef5286102b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4f9c5e3-afd3-41b8-9775-bef5286102b5" (UID: "f4f9c5e3-afd3-41b8-9775-bef5286102b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:42.948719 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4f9c5e3-afd3-41b8-9775-bef5286102b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:42.948765 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4f9c5e3-afd3-41b8-9775-bef5286102b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:42.948783 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hwc6\" (UniqueName: \"kubernetes.io/projected/f4f9c5e3-afd3-41b8-9775-bef5286102b5-kube-api-access-8hwc6\") on node \"crc\" DevicePath \"\"" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.351559 5049 generic.go:334] "Generic (PLEG): container finished" podID="f4f9c5e3-afd3-41b8-9775-bef5286102b5" containerID="a3ab9d43f8d0b686654903a641423fa82483b479de9a411c26e219bc33a33bee" exitCode=0 Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.351596 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckz4g" event={"ID":"f4f9c5e3-afd3-41b8-9775-bef5286102b5","Type":"ContainerDied","Data":"a3ab9d43f8d0b686654903a641423fa82483b479de9a411c26e219bc33a33bee"} Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.351619 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckz4g" event={"ID":"f4f9c5e3-afd3-41b8-9775-bef5286102b5","Type":"ContainerDied","Data":"e8767cb359e5d58bd426101c52dded49c9a7796321f8720244515286ac8afc5d"} Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.351639 5049 scope.go:117] "RemoveContainer" containerID="a3ab9d43f8d0b686654903a641423fa82483b479de9a411c26e219bc33a33bee" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.351794 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ckz4g" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.396084 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ckz4g"] Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.399172 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ckz4g"] Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.399308 5049 scope.go:117] "RemoveContainer" containerID="a87cec51e7d1096efe192fde3caf1a5b3cb3d6287deb9b379a8f00b9e8def2da" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.435786 5049 scope.go:117] "RemoveContainer" containerID="f71647c2a88b343bd232054e5ef2323317821db8f5bfda1a6630cfeb73fd8b29" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.459283 5049 scope.go:117] "RemoveContainer" containerID="a3ab9d43f8d0b686654903a641423fa82483b479de9a411c26e219bc33a33bee" Jan 27 18:22:43 crc kubenswrapper[5049]: E0127 18:22:43.459836 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3ab9d43f8d0b686654903a641423fa82483b479de9a411c26e219bc33a33bee\": container with ID starting with a3ab9d43f8d0b686654903a641423fa82483b479de9a411c26e219bc33a33bee not found: ID does not exist" containerID="a3ab9d43f8d0b686654903a641423fa82483b479de9a411c26e219bc33a33bee" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.459879 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3ab9d43f8d0b686654903a641423fa82483b479de9a411c26e219bc33a33bee"} err="failed to get container status \"a3ab9d43f8d0b686654903a641423fa82483b479de9a411c26e219bc33a33bee\": rpc error: code = NotFound desc = could not find container \"a3ab9d43f8d0b686654903a641423fa82483b479de9a411c26e219bc33a33bee\": container with ID starting with a3ab9d43f8d0b686654903a641423fa82483b479de9a411c26e219bc33a33bee not found: ID does not exist" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.459906 5049 scope.go:117] "RemoveContainer" containerID="a87cec51e7d1096efe192fde3caf1a5b3cb3d6287deb9b379a8f00b9e8def2da" Jan 27 18:22:43 crc kubenswrapper[5049]: E0127 18:22:43.460856 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a87cec51e7d1096efe192fde3caf1a5b3cb3d6287deb9b379a8f00b9e8def2da\": container with ID starting with a87cec51e7d1096efe192fde3caf1a5b3cb3d6287deb9b379a8f00b9e8def2da not found: ID does not exist" containerID="a87cec51e7d1096efe192fde3caf1a5b3cb3d6287deb9b379a8f00b9e8def2da" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.461043 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a87cec51e7d1096efe192fde3caf1a5b3cb3d6287deb9b379a8f00b9e8def2da"} err="failed to get container status \"a87cec51e7d1096efe192fde3caf1a5b3cb3d6287deb9b379a8f00b9e8def2da\": rpc error: code = NotFound desc = could not find container \"a87cec51e7d1096efe192fde3caf1a5b3cb3d6287deb9b379a8f00b9e8def2da\": container with ID starting with a87cec51e7d1096efe192fde3caf1a5b3cb3d6287deb9b379a8f00b9e8def2da not found: ID does not exist" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.461207 5049 scope.go:117] "RemoveContainer" containerID="f71647c2a88b343bd232054e5ef2323317821db8f5bfda1a6630cfeb73fd8b29" Jan 27 18:22:43 crc kubenswrapper[5049]: E0127 18:22:43.461779 5049 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f71647c2a88b343bd232054e5ef2323317821db8f5bfda1a6630cfeb73fd8b29\": container with ID starting with f71647c2a88b343bd232054e5ef2323317821db8f5bfda1a6630cfeb73fd8b29 not found: ID does not exist" containerID="f71647c2a88b343bd232054e5ef2323317821db8f5bfda1a6630cfeb73fd8b29" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.461808 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f71647c2a88b343bd232054e5ef2323317821db8f5bfda1a6630cfeb73fd8b29"} err="failed to get container status \"f71647c2a88b343bd232054e5ef2323317821db8f5bfda1a6630cfeb73fd8b29\": rpc error: code = NotFound desc = could not find container \"f71647c2a88b343bd232054e5ef2323317821db8f5bfda1a6630cfeb73fd8b29\": container with ID starting with f71647c2a88b343bd232054e5ef2323317821db8f5bfda1a6630cfeb73fd8b29 not found: ID does not exist" Jan 27 18:22:43 crc kubenswrapper[5049]: I0127 18:22:43.661794 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f9c5e3-afd3-41b8-9775-bef5286102b5" path="/var/lib/kubelet/pods/f4f9c5e3-afd3-41b8-9775-bef5286102b5/volumes" Jan 27 18:24:16 crc kubenswrapper[5049]: I0127 18:24:16.376834 5049 scope.go:117] "RemoveContainer" containerID="dacddea103072d0a42b71a23be2f283adff00cc36fd12e8c98fd60d11d432368" Jan 27 18:24:17 crc kubenswrapper[5049]: I0127 18:24:17.781892 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:24:17 crc kubenswrapper[5049]: I0127 18:24:17.781989 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.273381 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Jan 27 18:24:39 crc kubenswrapper[5049]: E0127 18:24:39.274408 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f9c5e3-afd3-41b8-9775-bef5286102b5" containerName="registry-server" Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.274430 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f9c5e3-afd3-41b8-9775-bef5286102b5" containerName="registry-server" Jan 27 18:24:39 crc kubenswrapper[5049]: E0127 18:24:39.274472 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f9c5e3-afd3-41b8-9775-bef5286102b5" containerName="extract-content" Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.274483 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f9c5e3-afd3-41b8-9775-bef5286102b5" containerName="extract-content" Jan 27 18:24:39 crc kubenswrapper[5049]: E0127 18:24:39.274508 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f9c5e3-afd3-41b8-9775-bef5286102b5" containerName="extract-utilities" Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.274522 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f9c5e3-afd3-41b8-9775-bef5286102b5" containerName="extract-utilities" Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.274755 5049 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4f9c5e3-afd3-41b8-9775-bef5286102b5" containerName="registry-server" Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.275425 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.277650 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-d672s" Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.283704 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.401793 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47x25\" (UniqueName: \"kubernetes.io/projected/2c1ba67d-2fb8-42a5-a89b-12e3245907ed-kube-api-access-47x25\") pod \"mariadb-copy-data\" (UID: \"2c1ba67d-2fb8-42a5-a89b-12e3245907ed\") " pod="openstack/mariadb-copy-data" Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.401869 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fcc38b22-2c42-46b5-a48c-f612ac52b86d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fcc38b22-2c42-46b5-a48c-f612ac52b86d\") pod \"mariadb-copy-data\" (UID: \"2c1ba67d-2fb8-42a5-a89b-12e3245907ed\") " pod="openstack/mariadb-copy-data" Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.503585 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47x25\" (UniqueName: \"kubernetes.io/projected/2c1ba67d-2fb8-42a5-a89b-12e3245907ed-kube-api-access-47x25\") pod \"mariadb-copy-data\" (UID: \"2c1ba67d-2fb8-42a5-a89b-12e3245907ed\") " pod="openstack/mariadb-copy-data" Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.503747 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fcc38b22-2c42-46b5-a48c-f612ac52b86d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fcc38b22-2c42-46b5-a48c-f612ac52b86d\") pod \"mariadb-copy-data\" (UID: \"2c1ba67d-2fb8-42a5-a89b-12e3245907ed\") " pod="openstack/mariadb-copy-data" Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.507910 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.508077 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fcc38b22-2c42-46b5-a48c-f612ac52b86d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fcc38b22-2c42-46b5-a48c-f612ac52b86d\") pod \"mariadb-copy-data\" (UID: \"2c1ba67d-2fb8-42a5-a89b-12e3245907ed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7f264456e8acd7c6239d42f0eacb52f10d23107117c6bc5f01a0b403e654bc83/globalmount\"" pod="openstack/mariadb-copy-data"
Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.529883 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47x25\" (UniqueName: \"kubernetes.io/projected/2c1ba67d-2fb8-42a5-a89b-12e3245907ed-kube-api-access-47x25\") pod \"mariadb-copy-data\" (UID: \"2c1ba67d-2fb8-42a5-a89b-12e3245907ed\") " pod="openstack/mariadb-copy-data"
Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.556815 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fcc38b22-2c42-46b5-a48c-f612ac52b86d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fcc38b22-2c42-46b5-a48c-f612ac52b86d\") pod \"mariadb-copy-data\" (UID: \"2c1ba67d-2fb8-42a5-a89b-12e3245907ed\") " pod="openstack/mariadb-copy-data"
Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.609150 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Jan 27 18:24:39 crc kubenswrapper[5049]: I0127 18:24:39.983347 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Jan 27 18:24:39 crc kubenswrapper[5049]: W0127 18:24:39.990933 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c1ba67d_2fb8_42a5_a89b_12e3245907ed.slice/crio-12ff7e05e1068627cbd982985100191bb1903270d253a1112edc007aab01ab56 WatchSource:0}: Error finding container 12ff7e05e1068627cbd982985100191bb1903270d253a1112edc007aab01ab56: Status 404 returned error can't find the container with id 12ff7e05e1068627cbd982985100191bb1903270d253a1112edc007aab01ab56
Jan 27 18:24:40 crc kubenswrapper[5049]: I0127 18:24:40.419759 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"2c1ba67d-2fb8-42a5-a89b-12e3245907ed","Type":"ContainerStarted","Data":"15e6193536ab0854936ab94bc4adf644599e8c3dc34cd023c130bc57327cbc6e"}
Jan 27 18:24:40 crc kubenswrapper[5049]: I0127 18:24:40.421096 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"2c1ba67d-2fb8-42a5-a89b-12e3245907ed","Type":"ContainerStarted","Data":"12ff7e05e1068627cbd982985100191bb1903270d253a1112edc007aab01ab56"}
Jan 27 18:24:43 crc kubenswrapper[5049]: I0127 18:24:43.368039 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=5.368007725 podStartE2EDuration="5.368007725s" podCreationTimestamp="2026-01-27 18:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:24:40.448199205 +0000 UTC m=+5255.547172854" watchObservedRunningTime="2026-01-27 18:24:43.368007725 +0000 UTC m=+5258.466981304"
Jan 27 18:24:43 crc kubenswrapper[5049]: I0127 18:24:43.391258 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 27 18:24:43 crc kubenswrapper[5049]: I0127 18:24:43.395546 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 18:24:43 crc kubenswrapper[5049]: I0127 18:24:43.416460 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 27 18:24:43 crc kubenswrapper[5049]: I0127 18:24:43.570481 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crj6w\" (UniqueName: \"kubernetes.io/projected/3367fee5-30a9-43e3-a8f4-eadaccde9055-kube-api-access-crj6w\") pod \"mariadb-client\" (UID: \"3367fee5-30a9-43e3-a8f4-eadaccde9055\") " pod="openstack/mariadb-client"
Jan 27 18:24:43 crc kubenswrapper[5049]: I0127 18:24:43.675976 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crj6w\" (UniqueName: \"kubernetes.io/projected/3367fee5-30a9-43e3-a8f4-eadaccde9055-kube-api-access-crj6w\") pod \"mariadb-client\" (UID: \"3367fee5-30a9-43e3-a8f4-eadaccde9055\") " pod="openstack/mariadb-client"
Jan 27 18:24:43 crc kubenswrapper[5049]: I0127 18:24:43.713323 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crj6w\" (UniqueName: \"kubernetes.io/projected/3367fee5-30a9-43e3-a8f4-eadaccde9055-kube-api-access-crj6w\") pod \"mariadb-client\" (UID: \"3367fee5-30a9-43e3-a8f4-eadaccde9055\") " pod="openstack/mariadb-client"
Jan 27 18:24:43 crc kubenswrapper[5049]: I0127 18:24:43.723948 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 18:24:44 crc kubenswrapper[5049]: I0127 18:24:44.140964 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 27 18:24:44 crc kubenswrapper[5049]: W0127 18:24:44.152902 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3367fee5_30a9_43e3_a8f4_eadaccde9055.slice/crio-70be7b3751e518fbd7973df428c05166513560d98289f1e87d731f2b2b0d1518 WatchSource:0}: Error finding container 70be7b3751e518fbd7973df428c05166513560d98289f1e87d731f2b2b0d1518: Status 404 returned error can't find the container with id 70be7b3751e518fbd7973df428c05166513560d98289f1e87d731f2b2b0d1518
Jan 27 18:24:44 crc kubenswrapper[5049]: I0127 18:24:44.451852 5049 generic.go:334] "Generic (PLEG): container finished" podID="3367fee5-30a9-43e3-a8f4-eadaccde9055" containerID="41c610df21115985f25c803b50a8e71c2d2b8a61ac054a25b002ede233d67a64" exitCode=0
Jan 27 18:24:44 crc kubenswrapper[5049]: I0127 18:24:44.451909 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"3367fee5-30a9-43e3-a8f4-eadaccde9055","Type":"ContainerDied","Data":"41c610df21115985f25c803b50a8e71c2d2b8a61ac054a25b002ede233d67a64"}
Jan 27 18:24:44 crc kubenswrapper[5049]: I0127 18:24:44.451943 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"3367fee5-30a9-43e3-a8f4-eadaccde9055","Type":"ContainerStarted","Data":"70be7b3751e518fbd7973df428c05166513560d98289f1e87d731f2b2b0d1518"}
Jan 27 18:24:45 crc kubenswrapper[5049]: I0127 18:24:45.797192 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 18:24:45 crc kubenswrapper[5049]: I0127 18:24:45.823182 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_3367fee5-30a9-43e3-a8f4-eadaccde9055/mariadb-client/0.log"
Jan 27 18:24:45 crc kubenswrapper[5049]: I0127 18:24:45.862272 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 27 18:24:45 crc kubenswrapper[5049]: I0127 18:24:45.871172 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 27 18:24:45 crc kubenswrapper[5049]: I0127 18:24:45.912547 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crj6w\" (UniqueName: \"kubernetes.io/projected/3367fee5-30a9-43e3-a8f4-eadaccde9055-kube-api-access-crj6w\") pod \"3367fee5-30a9-43e3-a8f4-eadaccde9055\" (UID: \"3367fee5-30a9-43e3-a8f4-eadaccde9055\") "
Jan 27 18:24:45 crc kubenswrapper[5049]: I0127 18:24:45.920164 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3367fee5-30a9-43e3-a8f4-eadaccde9055-kube-api-access-crj6w" (OuterVolumeSpecName: "kube-api-access-crj6w") pod "3367fee5-30a9-43e3-a8f4-eadaccde9055" (UID: "3367fee5-30a9-43e3-a8f4-eadaccde9055"). InnerVolumeSpecName "kube-api-access-crj6w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:24:46 crc kubenswrapper[5049]: I0127 18:24:46.015843 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crj6w\" (UniqueName: \"kubernetes.io/projected/3367fee5-30a9-43e3-a8f4-eadaccde9055-kube-api-access-crj6w\") on node \"crc\" DevicePath \"\""
Jan 27 18:24:46 crc kubenswrapper[5049]: I0127 18:24:46.035633 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Jan 27 18:24:46 crc kubenswrapper[5049]: E0127 18:24:46.036063 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3367fee5-30a9-43e3-a8f4-eadaccde9055" containerName="mariadb-client"
Jan 27 18:24:46 crc kubenswrapper[5049]: I0127 18:24:46.036087 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="3367fee5-30a9-43e3-a8f4-eadaccde9055" containerName="mariadb-client"
Jan 27 18:24:46 crc kubenswrapper[5049]: I0127 18:24:46.036286 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="3367fee5-30a9-43e3-a8f4-eadaccde9055" containerName="mariadb-client"
Jan 27 18:24:46 crc kubenswrapper[5049]: I0127 18:24:46.036915 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 18:24:46 crc kubenswrapper[5049]: I0127 18:24:46.045979 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 27 18:24:46 crc kubenswrapper[5049]: I0127 18:24:46.219167 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g9d4\" (UniqueName: \"kubernetes.io/projected/555c2819-45f7-42bb-89a4-ac86f9d8e680-kube-api-access-4g9d4\") pod \"mariadb-client\" (UID: \"555c2819-45f7-42bb-89a4-ac86f9d8e680\") " pod="openstack/mariadb-client"
Jan 27 18:24:46 crc kubenswrapper[5049]: I0127 18:24:46.322911 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g9d4\" (UniqueName: \"kubernetes.io/projected/555c2819-45f7-42bb-89a4-ac86f9d8e680-kube-api-access-4g9d4\") pod \"mariadb-client\" (UID: \"555c2819-45f7-42bb-89a4-ac86f9d8e680\") " pod="openstack/mariadb-client"
Jan 27 18:24:46 crc kubenswrapper[5049]: I0127 18:24:46.348898 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g9d4\" (UniqueName: \"kubernetes.io/projected/555c2819-45f7-42bb-89a4-ac86f9d8e680-kube-api-access-4g9d4\") pod \"mariadb-client\" (UID: \"555c2819-45f7-42bb-89a4-ac86f9d8e680\") " pod="openstack/mariadb-client"
Jan 27 18:24:46 crc kubenswrapper[5049]: I0127 18:24:46.359080 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 18:24:46 crc kubenswrapper[5049]: I0127 18:24:46.499874 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70be7b3751e518fbd7973df428c05166513560d98289f1e87d731f2b2b0d1518"
Jan 27 18:24:46 crc kubenswrapper[5049]: I0127 18:24:46.499984 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 18:24:46 crc kubenswrapper[5049]: I0127 18:24:46.534475 5049 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="3367fee5-30a9-43e3-a8f4-eadaccde9055" podUID="555c2819-45f7-42bb-89a4-ac86f9d8e680"
Jan 27 18:24:46 crc kubenswrapper[5049]: I0127 18:24:46.848306 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Jan 27 18:24:46 crc kubenswrapper[5049]: W0127 18:24:46.862855 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod555c2819_45f7_42bb_89a4_ac86f9d8e680.slice/crio-bec151454041f7ea40e0f85199df6555e7af4b079eddb0de8396cd6b76a3967c WatchSource:0}: Error finding container bec151454041f7ea40e0f85199df6555e7af4b079eddb0de8396cd6b76a3967c: Status 404 returned error can't find the container with id bec151454041f7ea40e0f85199df6555e7af4b079eddb0de8396cd6b76a3967c
Jan 27 18:24:47 crc kubenswrapper[5049]: I0127 18:24:47.513539 5049 generic.go:334] "Generic (PLEG): container finished" podID="555c2819-45f7-42bb-89a4-ac86f9d8e680" containerID="0ed14933c97394a1e2e8695967429081e4955e395cc27541f802de4720a1177a" exitCode=0
Jan 27 18:24:47 crc kubenswrapper[5049]: I0127 18:24:47.513617 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"555c2819-45f7-42bb-89a4-ac86f9d8e680","Type":"ContainerDied","Data":"0ed14933c97394a1e2e8695967429081e4955e395cc27541f802de4720a1177a"}
Jan 27 18:24:47 crc kubenswrapper[5049]: I0127 18:24:47.513707 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"555c2819-45f7-42bb-89a4-ac86f9d8e680","Type":"ContainerStarted","Data":"bec151454041f7ea40e0f85199df6555e7af4b079eddb0de8396cd6b76a3967c"}
Jan 27 18:24:47 crc kubenswrapper[5049]: I0127 18:24:47.660473 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3367fee5-30a9-43e3-a8f4-eadaccde9055" path="/var/lib/kubelet/pods/3367fee5-30a9-43e3-a8f4-eadaccde9055/volumes"
Jan 27 18:24:47 crc kubenswrapper[5049]: I0127 18:24:47.781190 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:24:47 crc kubenswrapper[5049]: I0127 18:24:47.781262 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:24:48 crc kubenswrapper[5049]: I0127 18:24:48.845991 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 18:24:48 crc kubenswrapper[5049]: I0127 18:24:48.870353 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_555c2819-45f7-42bb-89a4-ac86f9d8e680/mariadb-client/0.log"
Jan 27 18:24:48 crc kubenswrapper[5049]: I0127 18:24:48.894955 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Jan 27 18:24:48 crc kubenswrapper[5049]: I0127 18:24:48.900550 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Jan 27 18:24:48 crc kubenswrapper[5049]: I0127 18:24:48.964327 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g9d4\" (UniqueName: \"kubernetes.io/projected/555c2819-45f7-42bb-89a4-ac86f9d8e680-kube-api-access-4g9d4\") pod \"555c2819-45f7-42bb-89a4-ac86f9d8e680\" (UID: \"555c2819-45f7-42bb-89a4-ac86f9d8e680\") "
Jan 27 18:24:48 crc kubenswrapper[5049]: I0127 18:24:48.974595 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/555c2819-45f7-42bb-89a4-ac86f9d8e680-kube-api-access-4g9d4" (OuterVolumeSpecName: "kube-api-access-4g9d4") pod "555c2819-45f7-42bb-89a4-ac86f9d8e680" (UID: "555c2819-45f7-42bb-89a4-ac86f9d8e680"). InnerVolumeSpecName "kube-api-access-4g9d4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:24:49 crc kubenswrapper[5049]: I0127 18:24:49.066578 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g9d4\" (UniqueName: \"kubernetes.io/projected/555c2819-45f7-42bb-89a4-ac86f9d8e680-kube-api-access-4g9d4\") on node \"crc\" DevicePath \"\""
Jan 27 18:24:49 crc kubenswrapper[5049]: I0127 18:24:49.536197 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bec151454041f7ea40e0f85199df6555e7af4b079eddb0de8396cd6b76a3967c"
Jan 27 18:24:49 crc kubenswrapper[5049]: I0127 18:24:49.536284 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Jan 27 18:24:49 crc kubenswrapper[5049]: I0127 18:24:49.659010 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="555c2819-45f7-42bb-89a4-ac86f9d8e680" path="/var/lib/kubelet/pods/555c2819-45f7-42bb-89a4-ac86f9d8e680/volumes"
Jan 27 18:25:17 crc kubenswrapper[5049]: I0127 18:25:17.781192 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:25:17 crc kubenswrapper[5049]: I0127 18:25:17.781792 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:25:17 crc kubenswrapper[5049]: I0127 18:25:17.781835 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9"
Jan 27 18:25:17 crc kubenswrapper[5049]: I0127 18:25:17.782414 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 18:25:17 crc kubenswrapper[5049]: I0127 18:25:17.782470 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" gracePeriod=600
Jan 27 18:25:17 crc kubenswrapper[5049]: E0127 18:25:17.937373 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:25:18 crc kubenswrapper[5049]: I0127 18:25:18.777347 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" exitCode=0
Jan 27 18:25:18 crc kubenswrapper[5049]: I0127 18:25:18.777389 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192"}
Jan 27 18:25:18 crc kubenswrapper[5049]: I0127 18:25:18.777421 5049 scope.go:117] "RemoveContainer" containerID="cbf92023f90d1db31fc1c7a24fc76dae2e5df0bcbeedcf8ac7fb0089684228a6"
Jan 27 18:25:18 crc kubenswrapper[5049]: I0127 18:25:18.777903 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192"
Jan 27 18:25:18 crc
kubenswrapper[5049]: E0127 18:25:18.778091 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.455813 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 18:25:27 crc kubenswrapper[5049]: E0127 18:25:27.456811 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555c2819-45f7-42bb-89a4-ac86f9d8e680" containerName="mariadb-client" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.456831 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="555c2819-45f7-42bb-89a4-ac86f9d8e680" containerName="mariadb-client" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.457069 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="555c2819-45f7-42bb-89a4-ac86f9d8e680" containerName="mariadb-client" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.458484 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.460777 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.461067 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.469149 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-dvld7" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.476065 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.485325 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.486433 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.545865 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.557240 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.558741 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.575952 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.599824 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f05b7e7d-147f-44c9-a1a0-3bf20a581668-config\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.599886 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8764\" (UniqueName: \"kubernetes.io/projected/3e8ced19-22b2-4493-bd82-284b419a8045-kube-api-access-z8764\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.599940 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8ced19-22b2-4493-bd82-284b419a8045-config\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.600002 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbjlw\" (UniqueName: \"kubernetes.io/projected/f05b7e7d-147f-44c9-a1a0-3bf20a581668-kube-api-access-mbjlw\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.600034 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-94b3aca6-827c-4bc0-b9c1-f3469b030565\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-94b3aca6-827c-4bc0-b9c1-f3469b030565\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.600061 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05b7e7d-147f-44c9-a1a0-3bf20a581668-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.600081 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f05b7e7d-147f-44c9-a1a0-3bf20a581668-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.600097 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f05b7e7d-147f-44c9-a1a0-3bf20a581668-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.600206 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/3e8ced19-22b2-4493-bd82-284b419a8045-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.600309 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e8ced19-22b2-4493-bd82-284b419a8045-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.600407 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8ced19-22b2-4493-bd82-284b419a8045-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.600487 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e6188a64-e4a6-4be7-9aa1-443e7099ea97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6188a64-e4a6-4be7-9aa1-443e7099ea97\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.627615 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.629320 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.638013 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.638045 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-h9rrp" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.639885 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.656963 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.663244 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.665505 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.670042 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.671923 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.679528 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.687382 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.701622 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbjlw\" (UniqueName: \"kubernetes.io/projected/f05b7e7d-147f-44c9-a1a0-3bf20a581668-kube-api-access-mbjlw\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.701666 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.701765 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-config\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.701805 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fggx7\" (UniqueName: \"kubernetes.io/projected/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-kube-api-access-fggx7\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.701835 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-94b3aca6-827c-4bc0-b9c1-f3469b030565\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-94b3aca6-827c-4bc0-b9c1-f3469b030565\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.701868 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05b7e7d-147f-44c9-a1a0-3bf20a581668-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.701895 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f05b7e7d-147f-44c9-a1a0-3bf20a581668-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.701919 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f05b7e7d-147f-44c9-a1a0-3bf20a581668-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.701944 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-config\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.701971 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3e8ced19-22b2-4493-bd82-284b419a8045-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.701991 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.702029 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e8ced19-22b2-4493-bd82-284b419a8045-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.702062 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8ced19-22b2-4493-bd82-284b419a8045-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.702094 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e6188a64-e4a6-4be7-9aa1-443e7099ea97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6188a64-e4a6-4be7-9aa1-443e7099ea97\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.702118 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9340dae4-7637-4235-8945-d098374a1030\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9340dae4-7637-4235-8945-d098374a1030\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.702151 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f05b7e7d-147f-44c9-a1a0-3bf20a581668-config\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.702181 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-13fb2cf7-f879-4555-9004-a50a98f91231\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-13fb2cf7-f879-4555-9004-a50a98f91231\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.702209 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8764\" (UniqueName: \"kubernetes.io/projected/3e8ced19-22b2-4493-bd82-284b419a8045-kube-api-access-z8764\") pod 
\"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.702238 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.702260 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.702285 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.702315 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8ced19-22b2-4493-bd82-284b419a8045-config\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.702343 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxscw\" (UniqueName: \"kubernetes.io/projected/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-kube-api-access-cxscw\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.702384 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.702435 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f05b7e7d-147f-44c9-a1a0-3bf20a581668-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.703197 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f05b7e7d-147f-44c9-a1a0-3bf20a581668-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.703616 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f05b7e7d-147f-44c9-a1a0-3bf20a581668-config\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.704980 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/3e8ced19-22b2-4493-bd82-284b419a8045-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.705580 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8ced19-22b2-4493-bd82-284b419a8045-config\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.706116 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3e8ced19-22b2-4493-bd82-284b419a8045-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.707154 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.707308 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-94b3aca6-827c-4bc0-b9c1-f3469b030565\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-94b3aca6-827c-4bc0-b9c1-f3469b030565\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fce0933bf767cd586257ede1a620ff9aa7619a2acc38bd18326aa21403a65a48/globalmount\"" pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.707191 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.707558 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e6188a64-e4a6-4be7-9aa1-443e7099ea97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6188a64-e4a6-4be7-9aa1-443e7099ea97\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e7387b8712ee406fc8c2cf00acf751105d823d7f387423cd3f95bb04000d14af/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.710892 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8ced19-22b2-4493-bd82-284b419a8045-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.711278 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f05b7e7d-147f-44c9-a1a0-3bf20a581668-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.722005 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbjlw\" (UniqueName: \"kubernetes.io/projected/f05b7e7d-147f-44c9-a1a0-3bf20a581668-kube-api-access-mbjlw\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.723954 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8764\" (UniqueName: \"kubernetes.io/projected/3e8ced19-22b2-4493-bd82-284b419a8045-kube-api-access-z8764\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.741903 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e6188a64-e4a6-4be7-9aa1-443e7099ea97\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e6188a64-e4a6-4be7-9aa1-443e7099ea97\") pod \"ovsdbserver-nb-0\" (UID: \"f05b7e7d-147f-44c9-a1a0-3bf20a581668\") " pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.742575 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-94b3aca6-827c-4bc0-b9c1-f3469b030565\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-94b3aca6-827c-4bc0-b9c1-f3469b030565\") pod \"ovsdbserver-nb-1\" (UID: \"3e8ced19-22b2-4493-bd82-284b419a8045\") " pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.802960 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-config\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.803004 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 
18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.803035 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6ce48c08-ae9a-41f8-9d39-eae9adde8a30\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ce48c08-ae9a-41f8-9d39-eae9adde8a30\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.803990 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-config\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804020 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804088 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f50c6cc5-4e8e-40e5-b3de-de8455a9004a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f50c6cc5-4e8e-40e5-b3de-de8455a9004a\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804166 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9340dae4-7637-4235-8945-d098374a1030\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9340dae4-7637-4235-8945-d098374a1030\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804270 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-13fb2cf7-f879-4555-9004-a50a98f91231\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-13fb2cf7-f879-4555-9004-a50a98f91231\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804329 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804364 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804395 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804420 5049 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804451 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-config\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804501 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxscw\" (UniqueName: \"kubernetes.io/projected/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-kube-api-access-cxscw\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804523 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bd16677-2b72-4120-b689-ce563651bfe9-config\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804543 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2pdp\" (UniqueName: \"kubernetes.io/projected/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-kube-api-access-q2pdp\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804579 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8bd16677-2b72-4120-b689-ce563651bfe9-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804624 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804659 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804707 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brjzn\" (UniqueName: \"kubernetes.io/projected/8bd16677-2b72-4120-b689-ce563651bfe9-kube-api-access-brjzn\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804737 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/8bd16677-2b72-4120-b689-ce563651bfe9-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804761 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-config\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804809 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804839 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804868 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fggx7\" (UniqueName: \"kubernetes.io/projected/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-kube-api-access-fggx7\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.804899 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd16677-2b72-4120-b689-ce563651bfe9-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.806173 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.806221 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9340dae4-7637-4235-8945-d098374a1030\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9340dae4-7637-4235-8945-d098374a1030\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bbf9394f3ab2178d1caff765de50352b2f99fedc4377a75c2303993578022de9/globalmount\"" pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.806333 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.806821 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.807760 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.807793 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-13fb2cf7-f879-4555-9004-a50a98f91231\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-13fb2cf7-f879-4555-9004-a50a98f91231\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e6fe20c85fcdbd801594106f8109af0ebff75bb5e521684a91f31db85d45b12b/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.808475 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-config\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.809580 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.810706 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.810882 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.823249 5049 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxscw\" (UniqueName: \"kubernetes.io/projected/26d69dfa-16a7-4f78-89e8-4786c7efbfa4-kube-api-access-cxscw\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.834961 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fggx7\" (UniqueName: \"kubernetes.io/projected/ae4f99ce-4541-4d85-a1a6-64a8295dbd37-kube-api-access-fggx7\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.849552 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.850044 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-13fb2cf7-f879-4555-9004-a50a98f91231\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-13fb2cf7-f879-4555-9004-a50a98f91231\") pod \"ovsdbserver-sb-0\" (UID: \"ae4f99ce-4541-4d85-a1a6-64a8295dbd37\") " pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.853116 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9340dae4-7637-4235-8945-d098374a1030\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9340dae4-7637-4235-8945-d098374a1030\") pod \"ovsdbserver-nb-2\" (UID: \"26d69dfa-16a7-4f78-89e8-4786c7efbfa4\") " pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.861857 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.876885 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.906493 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brjzn\" (UniqueName: \"kubernetes.io/projected/8bd16677-2b72-4120-b689-ce563651bfe9-kube-api-access-brjzn\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.906540 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bd16677-2b72-4120-b689-ce563651bfe9-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.906568 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.906588 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.906606 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd16677-2b72-4120-b689-ce563651bfe9-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.906653 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6ce48c08-ae9a-41f8-9d39-eae9adde8a30\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ce48c08-ae9a-41f8-9d39-eae9adde8a30\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.906707 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f50c6cc5-4e8e-40e5-b3de-de8455a9004a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f50c6cc5-4e8e-40e5-b3de-de8455a9004a\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.906761 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.906779 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-config\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.906803 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8bd16677-2b72-4120-b689-ce563651bfe9-config\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.906818 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2pdp\" (UniqueName: \"kubernetes.io/projected/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-kube-api-access-q2pdp\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.906842 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8bd16677-2b72-4120-b689-ce563651bfe9-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.907372 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8bd16677-2b72-4120-b689-ce563651bfe9-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.908732 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bd16677-2b72-4120-b689-ce563651bfe9-config\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.908754 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-config\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.908803 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.909138 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8bd16677-2b72-4120-b689-ce563651bfe9-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.910105 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.911693 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.911738 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f50c6cc5-4e8e-40e5-b3de-de8455a9004a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f50c6cc5-4e8e-40e5-b3de-de8455a9004a\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f011533f4f45c89947a3aeae5ef25b77f64fae47357c0d0d36e168ef3170d4a3/globalmount\"" pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.913825 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd16677-2b72-4120-b689-ce563651bfe9-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.922196 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.922254 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6ce48c08-ae9a-41f8-9d39-eae9adde8a30\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ce48c08-ae9a-41f8-9d39-eae9adde8a30\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/be35e2ab0fe9bf86499d7f16ffc0d5280ca93bbb57cd68ce6d4d9840523cf064/globalmount\"" pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.923437 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2pdp\" (UniqueName: \"kubernetes.io/projected/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-kube-api-access-q2pdp\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.924021 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d2aff60-fc81-4c03-8ad4-6555e3a3a41d-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.925657 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brjzn\" (UniqueName: \"kubernetes.io/projected/8bd16677-2b72-4120-b689-ce563651bfe9-kube-api-access-brjzn\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.944900 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f50c6cc5-4e8e-40e5-b3de-de8455a9004a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f50c6cc5-4e8e-40e5-b3de-de8455a9004a\") pod \"ovsdbserver-sb-2\" (UID: \"8bd16677-2b72-4120-b689-ce563651bfe9\") " pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.951385 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6ce48c08-ae9a-41f8-9d39-eae9adde8a30\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6ce48c08-ae9a-41f8-9d39-eae9adde8a30\") pod \"ovsdbserver-sb-1\" (UID: 
\"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d\") " pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.954568 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.987157 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:27 crc kubenswrapper[5049]: I0127 18:25:27.992854 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:28 crc kubenswrapper[5049]: I0127 18:25:28.411658 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Jan 27 18:25:28 crc kubenswrapper[5049]: I0127 18:25:28.504921 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Jan 27 18:25:28 crc kubenswrapper[5049]: W0127 18:25:28.507345 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26d69dfa_16a7_4f78_89e8_4786c7efbfa4.slice/crio-36a050808d440fe6910c5f75a27a9906875a19d7df1d53d1e31acc2898ff28d2 WatchSource:0}: Error finding container 36a050808d440fe6910c5f75a27a9906875a19d7df1d53d1e31acc2898ff28d2: Status 404 returned error can't find the container with id 36a050808d440fe6910c5f75a27a9906875a19d7df1d53d1e31acc2898ff28d2 Jan 27 18:25:28 crc kubenswrapper[5049]: I0127 18:25:28.608906 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 18:25:28 crc kubenswrapper[5049]: W0127 18:25:28.609870 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae4f99ce_4541_4d85_a1a6_64a8295dbd37.slice/crio-ec2aeca35bfd1313932eccd33e9cafaea6c8a7c125d788827d074d562b5db06d WatchSource:0}: Error finding container ec2aeca35bfd1313932eccd33e9cafaea6c8a7c125d788827d074d562b5db06d: Status 404 returned error can't find the container with id ec2aeca35bfd1313932eccd33e9cafaea6c8a7c125d788827d074d562b5db06d Jan 27 18:25:28 crc kubenswrapper[5049]: I0127 18:25:28.875788 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"3e8ced19-22b2-4493-bd82-284b419a8045","Type":"ContainerStarted","Data":"69f0e056be95b05607efa44afcbe7bf9c8b9d2a92ad9107aa6cc47d9965288ef"} Jan 27 18:25:28 crc kubenswrapper[5049]: I0127 18:25:28.875829 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"3e8ced19-22b2-4493-bd82-284b419a8045","Type":"ContainerStarted","Data":"ea36f95260b66db919d440fc7a14cdf609eaabac8fc58095291c45518678d5cb"} Jan 27 18:25:28 crc kubenswrapper[5049]: I0127 18:25:28.875839 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"3e8ced19-22b2-4493-bd82-284b419a8045","Type":"ContainerStarted","Data":"51cfd8be2b21315c2d4ccd34029e45daf305440ef4ca6d5078ff2654961f5eef"} Jan 27 18:25:28 crc kubenswrapper[5049]: I0127 18:25:28.877450 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"26d69dfa-16a7-4f78-89e8-4786c7efbfa4","Type":"ContainerStarted","Data":"7f64ca7a618360f98cbab3a37c1f5164e04b1bd540d7fd9c13b53864920f9a14"} Jan 27 18:25:28 crc kubenswrapper[5049]: I0127 18:25:28.877490 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" 
event={"ID":"26d69dfa-16a7-4f78-89e8-4786c7efbfa4","Type":"ContainerStarted","Data":"4def1e57e7105858b68571318176b2993cc5e7bd8b1aaca84afb705b928614bc"} Jan 27 18:25:28 crc kubenswrapper[5049]: I0127 18:25:28.877502 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"26d69dfa-16a7-4f78-89e8-4786c7efbfa4","Type":"ContainerStarted","Data":"36a050808d440fe6910c5f75a27a9906875a19d7df1d53d1e31acc2898ff28d2"} Jan 27 18:25:28 crc kubenswrapper[5049]: I0127 18:25:28.878701 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ae4f99ce-4541-4d85-a1a6-64a8295dbd37","Type":"ContainerStarted","Data":"fe2d83ff97b848fa5fbfb0805ad14ddfd5a65579149c789710f92c559df70495"} Jan 27 18:25:28 crc kubenswrapper[5049]: I0127 18:25:28.878741 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ae4f99ce-4541-4d85-a1a6-64a8295dbd37","Type":"ContainerStarted","Data":"ec2aeca35bfd1313932eccd33e9cafaea6c8a7c125d788827d074d562b5db06d"} Jan 27 18:25:28 crc kubenswrapper[5049]: I0127 18:25:28.897448 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=2.897430385 podStartE2EDuration="2.897430385s" podCreationTimestamp="2026-01-27 18:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:25:28.891474567 +0000 UTC m=+5303.990448116" watchObservedRunningTime="2026-01-27 18:25:28.897430385 +0000 UTC m=+5303.996403934" Jan 27 18:25:28 crc kubenswrapper[5049]: I0127 18:25:28.910331 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=2.910315177 podStartE2EDuration="2.910315177s" podCreationTimestamp="2026-01-27 18:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:25:28.909229697 +0000 UTC m=+5304.008203246" watchObservedRunningTime="2026-01-27 18:25:28.910315177 +0000 UTC m=+5304.009288726" Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.307902 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 18:25:29 crc kubenswrapper[5049]: W0127 18:25:29.310938 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf05b7e7d_147f_44c9_a1a0_3bf20a581668.slice/crio-15aa103cf8f19e5bcdcabadcb993729d2a80ba1634ac187af3d658c9cfe08d99 WatchSource:0}: Error finding container 15aa103cf8f19e5bcdcabadcb993729d2a80ba1634ac187af3d658c9cfe08d99: Status 404 returned error can't find the container with id 15aa103cf8f19e5bcdcabadcb993729d2a80ba1634ac187af3d658c9cfe08d99 Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.500984 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Jan 27 18:25:29 crc kubenswrapper[5049]: W0127 18:25:29.502328 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bd16677_2b72_4120_b689_ce563651bfe9.slice/crio-6c11101e292856401e926a12c1c86b769f4c85d489f7a0de02cdb8d3881a88d7 WatchSource:0}: Error finding container 6c11101e292856401e926a12c1c86b769f4c85d489f7a0de02cdb8d3881a88d7: Status 404 returned error can't find the container with id 
6c11101e292856401e926a12c1c86b769f4c85d489f7a0de02cdb8d3881a88d7 Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.613369 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Jan 27 18:25:29 crc kubenswrapper[5049]: W0127 18:25:29.627989 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d2aff60_fc81_4c03_8ad4_6555e3a3a41d.slice/crio-ad8d4bf228f2c34e59429c0ccae2beb40274fc0d3d95a039e8009d573474a599 WatchSource:0}: Error finding container ad8d4bf228f2c34e59429c0ccae2beb40274fc0d3d95a039e8009d573474a599: Status 404 returned error can't find the container with id ad8d4bf228f2c34e59429c0ccae2beb40274fc0d3d95a039e8009d573474a599 Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.894937 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f05b7e7d-147f-44c9-a1a0-3bf20a581668","Type":"ContainerStarted","Data":"6bfeca662014f838a46ce62766b566df564fe95f386556cd8fdf5c1d50e1603c"} Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.895193 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f05b7e7d-147f-44c9-a1a0-3bf20a581668","Type":"ContainerStarted","Data":"f4a3b43b98720e1665cf9f90e3b0cd43d10a3568a81b6f04c54eaacfc67b6359"} Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.895204 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f05b7e7d-147f-44c9-a1a0-3bf20a581668","Type":"ContainerStarted","Data":"15aa103cf8f19e5bcdcabadcb993729d2a80ba1634ac187af3d658c9cfe08d99"} Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.899220 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ae4f99ce-4541-4d85-a1a6-64a8295dbd37","Type":"ContainerStarted","Data":"938a87b2aa8e11f1d5f2be45c705c8223cc2d5ef3f056ac2e9151023e5b39b23"} Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.907803 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"8bd16677-2b72-4120-b689-ce563651bfe9","Type":"ContainerStarted","Data":"3c92d8a83e9dd2a0d9d8562618fc3ba44419d8fae14441a8165197ea2c39a28b"} Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.907842 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"8bd16677-2b72-4120-b689-ce563651bfe9","Type":"ContainerStarted","Data":"a2fd060f2f1da5aefdbff7c5d5ddb48634b27163764c6b0a9e59de3ef62c7611"} Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.907852 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"8bd16677-2b72-4120-b689-ce563651bfe9","Type":"ContainerStarted","Data":"6c11101e292856401e926a12c1c86b769f4c85d489f7a0de02cdb8d3881a88d7"} Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.915169 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.91515217 podStartE2EDuration="3.91515217s" podCreationTimestamp="2026-01-27 18:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:25:29.909951644 +0000 UTC m=+5305.008925193" watchObservedRunningTime="2026-01-27 18:25:29.91515217 +0000 UTC m=+5305.014125719" Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.922691 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-1" event={"ID":"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d","Type":"ContainerStarted","Data":"5c6a25b9727a3cbd7916e1a56145b30288c68386e47920c00b136063d5414a04"} Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.922728 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d","Type":"ContainerStarted","Data":"ad8d4bf228f2c34e59429c0ccae2beb40274fc0d3d95a039e8009d573474a599"} Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.930281 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=3.930261775 podStartE2EDuration="3.930261775s" podCreationTimestamp="2026-01-27 18:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:25:29.925704577 +0000 UTC m=+5305.024678136" watchObservedRunningTime="2026-01-27 18:25:29.930261775 +0000 UTC m=+5305.029235324" Jan 27 18:25:29 crc kubenswrapper[5049]: I0127 18:25:29.949010 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.948990453 podStartE2EDuration="3.948990453s" podCreationTimestamp="2026-01-27 18:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:25:29.940551435 +0000 UTC m=+5305.039524984" watchObservedRunningTime="2026-01-27 18:25:29.948990453 +0000 UTC m=+5305.047964002" Jan 27 18:25:30 crc kubenswrapper[5049]: I0127 18:25:30.850954 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:30 crc kubenswrapper[5049]: I0127 18:25:30.862636 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:30 crc kubenswrapper[5049]: I0127 18:25:30.877850 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:30 crc kubenswrapper[5049]: I0127 18:25:30.937718 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"6d2aff60-fc81-4c03-8ad4-6555e3a3a41d","Type":"ContainerStarted","Data":"8727430f12083309a8e1d2398a77509c7129c8a8356707b5e899e9841ee7f787"} Jan 27 18:25:30 crc kubenswrapper[5049]: I0127 18:25:30.955264 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:30 crc kubenswrapper[5049]: I0127 18:25:30.977525 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.977495071 podStartE2EDuration="4.977495071s" podCreationTimestamp="2026-01-27 18:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:25:30.967592543 +0000 UTC m=+5306.066566132" watchObservedRunningTime="2026-01-27 18:25:30.977495071 +0000 UTC m=+5306.076468660" Jan 27 18:25:30 crc kubenswrapper[5049]: I0127 18:25:30.987735 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:30 crc kubenswrapper[5049]: I0127 18:25:30.993039 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:32 crc kubenswrapper[5049]: I0127 
18:25:32.645970 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:25:32 crc kubenswrapper[5049]: E0127 18:25:32.646463 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:25:32 crc kubenswrapper[5049]: I0127 18:25:32.851106 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:32 crc kubenswrapper[5049]: I0127 18:25:32.862612 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:32 crc kubenswrapper[5049]: I0127 18:25:32.877302 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:32 crc kubenswrapper[5049]: I0127 18:25:32.956195 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:32 crc kubenswrapper[5049]: I0127 18:25:32.987825 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:32 crc kubenswrapper[5049]: I0127 18:25:32.993588 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:33 crc kubenswrapper[5049]: I0127 18:25:33.907224 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:33 crc kubenswrapper[5049]: I0127 18:25:33.917629 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:33 crc kubenswrapper[5049]: I0127 18:25:33.923325 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:33 crc kubenswrapper[5049]: I0127 18:25:33.980623 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Jan 27 18:25:33 crc kubenswrapper[5049]: I0127 18:25:33.981024 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.000273 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.064347 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.124541 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.124847 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.191383 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.211021 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5db5ff4945-rz6bv"] Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.214731 5049 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.220434 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.237548 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5db5ff4945-rz6bv"] Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.339351 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-config\") pod \"dnsmasq-dns-5db5ff4945-rz6bv\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.339526 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpvgn\" (UniqueName: \"kubernetes.io/projected/25105126-62e3-4166-81aa-c2f933530097-kube-api-access-zpvgn\") pod \"dnsmasq-dns-5db5ff4945-rz6bv\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.339590 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-dns-svc\") pod \"dnsmasq-dns-5db5ff4945-rz6bv\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.339829 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-ovsdbserver-nb\") pod \"dnsmasq-dns-5db5ff4945-rz6bv\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.441020 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-config\") pod \"dnsmasq-dns-5db5ff4945-rz6bv\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.441132 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpvgn\" (UniqueName: \"kubernetes.io/projected/25105126-62e3-4166-81aa-c2f933530097-kube-api-access-zpvgn\") pod \"dnsmasq-dns-5db5ff4945-rz6bv\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.441163 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-dns-svc\") pod \"dnsmasq-dns-5db5ff4945-rz6bv\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.441233 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-ovsdbserver-nb\") pod \"dnsmasq-dns-5db5ff4945-rz6bv\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " 
pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.442505 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-ovsdbserver-nb\") pod \"dnsmasq-dns-5db5ff4945-rz6bv\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.442587 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-dns-svc\") pod \"dnsmasq-dns-5db5ff4945-rz6bv\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.442660 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-config\") pod \"dnsmasq-dns-5db5ff4945-rz6bv\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.462433 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpvgn\" (UniqueName: \"kubernetes.io/projected/25105126-62e3-4166-81aa-c2f933530097-kube-api-access-zpvgn\") pod \"dnsmasq-dns-5db5ff4945-rz6bv\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.547242 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.611506 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db5ff4945-rz6bv"] Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.663826 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f5b99fb57-926hv"] Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.665697 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.668864 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.673254 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f5b99fb57-926hv"] Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.748806 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-dns-svc\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.749356 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-config\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.749394 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc7n2\" (UniqueName: \"kubernetes.io/projected/23e8940e-bf1e-446b-9a25-d76674e9a6c9-kube-api-access-sc7n2\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.749417 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-ovsdbserver-nb\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.749511 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-ovsdbserver-sb\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.826583 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db5ff4945-rz6bv"] Jan 27 18:25:34 crc kubenswrapper[5049]: W0127 18:25:34.834252 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25105126_62e3_4166_81aa_c2f933530097.slice/crio-53b521871aaf19d0bc9c93e2b15e8e076e0d7e3ab7e5e43e00326948ae8198ec WatchSource:0}: Error finding container 53b521871aaf19d0bc9c93e2b15e8e076e0d7e3ab7e5e43e00326948ae8198ec: Status 404 returned error can't find the container with id 53b521871aaf19d0bc9c93e2b15e8e076e0d7e3ab7e5e43e00326948ae8198ec Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.850696 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc7n2\" (UniqueName: \"kubernetes.io/projected/23e8940e-bf1e-446b-9a25-d76674e9a6c9-kube-api-access-sc7n2\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc 
kubenswrapper[5049]: I0127 18:25:34.850738 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-ovsdbserver-nb\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.850810 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-ovsdbserver-sb\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.850867 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-dns-svc\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.850892 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-config\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.851687 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-config\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.851788 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-ovsdbserver-nb\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.851816 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-ovsdbserver-sb\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.852033 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-dns-svc\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.866061 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc7n2\" (UniqueName: \"kubernetes.io/projected/23e8940e-bf1e-446b-9a25-d76674e9a6c9-kube-api-access-sc7n2\") pod \"dnsmasq-dns-6f5b99fb57-926hv\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.971810 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" event={"ID":"25105126-62e3-4166-81aa-c2f933530097","Type":"ContainerStarted","Data":"53b521871aaf19d0bc9c93e2b15e8e076e0d7e3ab7e5e43e00326948ae8198ec"} Jan 27 18:25:34 crc kubenswrapper[5049]: I0127 18:25:34.985538 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:35 crc kubenswrapper[5049]: I0127 18:25:35.016481 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Jan 27 18:25:35 crc kubenswrapper[5049]: I0127 18:25:35.295062 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f5b99fb57-926hv"] Jan 27 18:25:35 crc kubenswrapper[5049]: W0127 18:25:35.299583 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23e8940e_bf1e_446b_9a25_d76674e9a6c9.slice/crio-cb6bf48afdce595e8d9d5e7f6689998610c3015d6b9d84fdf07d72155c97b052 WatchSource:0}: Error finding container cb6bf48afdce595e8d9d5e7f6689998610c3015d6b9d84fdf07d72155c97b052: Status 404 returned error can't find the container with id cb6bf48afdce595e8d9d5e7f6689998610c3015d6b9d84fdf07d72155c97b052 Jan 27 18:25:35 crc kubenswrapper[5049]: I0127 18:25:35.983612 5049 generic.go:334] "Generic (PLEG): container finished" podID="23e8940e-bf1e-446b-9a25-d76674e9a6c9" containerID="20067cd66a3809551f43d1bb9e566dd71918cd0398104e11cc8e0f0163828391" exitCode=0 Jan 27 18:25:35 crc kubenswrapper[5049]: I0127 18:25:35.983868 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" event={"ID":"23e8940e-bf1e-446b-9a25-d76674e9a6c9","Type":"ContainerDied","Data":"20067cd66a3809551f43d1bb9e566dd71918cd0398104e11cc8e0f0163828391"} Jan 27 18:25:35 crc kubenswrapper[5049]: I0127 18:25:35.984205 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" event={"ID":"23e8940e-bf1e-446b-9a25-d76674e9a6c9","Type":"ContainerStarted","Data":"cb6bf48afdce595e8d9d5e7f6689998610c3015d6b9d84fdf07d72155c97b052"} Jan 27 18:25:35 crc kubenswrapper[5049]: I0127 18:25:35.986988 5049 generic.go:334] "Generic (PLEG): container finished" podID="25105126-62e3-4166-81aa-c2f933530097" containerID="85a9b7b38cfb4b362fb2b779e51593bf8505511d272cef96ae8aedea2cc194f6" exitCode=0 Jan 27 18:25:35 crc kubenswrapper[5049]: I0127 18:25:35.987090 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" event={"ID":"25105126-62e3-4166-81aa-c2f933530097","Type":"ContainerDied","Data":"85a9b7b38cfb4b362fb2b779e51593bf8505511d272cef96ae8aedea2cc194f6"} Jan 27 18:25:36 crc kubenswrapper[5049]: I0127 18:25:36.272379 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:36 crc kubenswrapper[5049]: I0127 18:25:36.275787 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-ovsdbserver-nb\") pod \"25105126-62e3-4166-81aa-c2f933530097\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " Jan 27 18:25:36 crc kubenswrapper[5049]: I0127 18:25:36.275867 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-config\") pod \"25105126-62e3-4166-81aa-c2f933530097\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " Jan 27 18:25:36 crc kubenswrapper[5049]: I0127 18:25:36.275927 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpvgn\" (UniqueName: \"kubernetes.io/projected/25105126-62e3-4166-81aa-c2f933530097-kube-api-access-zpvgn\") pod \"25105126-62e3-4166-81aa-c2f933530097\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " Jan 27 18:25:36 crc kubenswrapper[5049]: I0127 18:25:36.275959 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-dns-svc\") pod \"25105126-62e3-4166-81aa-c2f933530097\" (UID: \"25105126-62e3-4166-81aa-c2f933530097\") " Jan 27 18:25:36 crc kubenswrapper[5049]: I0127 18:25:36.279928 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25105126-62e3-4166-81aa-c2f933530097-kube-api-access-zpvgn" (OuterVolumeSpecName: "kube-api-access-zpvgn") pod "25105126-62e3-4166-81aa-c2f933530097" (UID: "25105126-62e3-4166-81aa-c2f933530097"). InnerVolumeSpecName "kube-api-access-zpvgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:25:36 crc kubenswrapper[5049]: I0127 18:25:36.298420 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25105126-62e3-4166-81aa-c2f933530097" (UID: "25105126-62e3-4166-81aa-c2f933530097"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:25:36 crc kubenswrapper[5049]: I0127 18:25:36.301092 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-config" (OuterVolumeSpecName: "config") pod "25105126-62e3-4166-81aa-c2f933530097" (UID: "25105126-62e3-4166-81aa-c2f933530097"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:25:36 crc kubenswrapper[5049]: I0127 18:25:36.301772 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25105126-62e3-4166-81aa-c2f933530097" (UID: "25105126-62e3-4166-81aa-c2f933530097"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:25:36 crc kubenswrapper[5049]: I0127 18:25:36.377154 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:25:36 crc kubenswrapper[5049]: I0127 18:25:36.377183 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpvgn\" (UniqueName: \"kubernetes.io/projected/25105126-62e3-4166-81aa-c2f933530097-kube-api-access-zpvgn\") on node \"crc\" DevicePath \"\"" Jan 27 18:25:36 crc kubenswrapper[5049]: I0127 18:25:36.377193 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:25:36 crc kubenswrapper[5049]: I0127 18:25:36.377202 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25105126-62e3-4166-81aa-c2f933530097-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:25:37 crc kubenswrapper[5049]: I0127 18:25:37.009282 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" event={"ID":"25105126-62e3-4166-81aa-c2f933530097","Type":"ContainerDied","Data":"53b521871aaf19d0bc9c93e2b15e8e076e0d7e3ab7e5e43e00326948ae8198ec"} Jan 27 18:25:37 crc kubenswrapper[5049]: I0127 18:25:37.009842 5049 scope.go:117] "RemoveContainer" containerID="85a9b7b38cfb4b362fb2b779e51593bf8505511d272cef96ae8aedea2cc194f6" Jan 27 18:25:37 crc kubenswrapper[5049]: I0127 18:25:37.009333 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db5ff4945-rz6bv" Jan 27 18:25:37 crc kubenswrapper[5049]: I0127 18:25:37.016934 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" event={"ID":"23e8940e-bf1e-446b-9a25-d76674e9a6c9","Type":"ContainerStarted","Data":"67bc370eba8434130a82f2a8871561fbdfac8b51089be32800abe53a188b7ed3"} Jan 27 18:25:37 crc kubenswrapper[5049]: I0127 18:25:37.017535 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:37 crc kubenswrapper[5049]: I0127 18:25:37.046528 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" podStartSLOduration=3.046512361 podStartE2EDuration="3.046512361s" podCreationTimestamp="2026-01-27 18:25:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:25:37.04081915 +0000 UTC m=+5312.139792699" watchObservedRunningTime="2026-01-27 18:25:37.046512361 +0000 UTC m=+5312.145485910" Jan 27 18:25:37 crc kubenswrapper[5049]: I0127 18:25:37.127563 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db5ff4945-rz6bv"] Jan 27 18:25:37 crc kubenswrapper[5049]: I0127 18:25:37.141983 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5db5ff4945-rz6bv"] Jan 27 18:25:37 crc kubenswrapper[5049]: I0127 18:25:37.658597 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25105126-62e3-4166-81aa-c2f933530097" path="/var/lib/kubelet/pods/25105126-62e3-4166-81aa-c2f933530097/volumes" Jan 27 18:25:37 crc kubenswrapper[5049]: I0127 18:25:37.930986 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 18:25:40 crc kubenswrapper[5049]: I0127 18:25:40.850202 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Jan 27 18:25:40 crc kubenswrapper[5049]: E0127 18:25:40.853570 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25105126-62e3-4166-81aa-c2f933530097" containerName="init" Jan 27 18:25:40 crc kubenswrapper[5049]: I0127 18:25:40.853841 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="25105126-62e3-4166-81aa-c2f933530097" containerName="init" Jan 27 18:25:40 crc kubenswrapper[5049]: I0127 18:25:40.854363 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="25105126-62e3-4166-81aa-c2f933530097" containerName="init" Jan 27 18:25:40 crc kubenswrapper[5049]: I0127 18:25:40.855904 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Jan 27 18:25:40 crc kubenswrapper[5049]: I0127 18:25:40.874377 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Jan 27 18:25:40 crc kubenswrapper[5049]: I0127 18:25:40.892177 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 27 18:25:40 crc kubenswrapper[5049]: I0127 18:25:40.971005 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3c07f615-4864-4724-b449-b5c91c539778-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"3c07f615-4864-4724-b449-b5c91c539778\") " pod="openstack/ovn-copy-data" Jan 27 18:25:40 crc kubenswrapper[5049]: I0127 18:25:40.971123 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fe0a249f-48f1-4ac4-a264-e4eae1834230\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe0a249f-48f1-4ac4-a264-e4eae1834230\") pod \"ovn-copy-data\" (UID: \"3c07f615-4864-4724-b449-b5c91c539778\") " pod="openstack/ovn-copy-data" Jan 27 18:25:40 crc kubenswrapper[5049]: I0127 18:25:40.971234 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdhpk\" (UniqueName: \"kubernetes.io/projected/3c07f615-4864-4724-b449-b5c91c539778-kube-api-access-pdhpk\") pod \"ovn-copy-data\" (UID: \"3c07f615-4864-4724-b449-b5c91c539778\") " pod="openstack/ovn-copy-data" Jan 27 18:25:41 crc kubenswrapper[5049]: I0127 18:25:41.072691 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdhpk\" (UniqueName: \"kubernetes.io/projected/3c07f615-4864-4724-b449-b5c91c539778-kube-api-access-pdhpk\") pod \"ovn-copy-data\" (UID: \"3c07f615-4864-4724-b449-b5c91c539778\") " pod="openstack/ovn-copy-data" Jan 27 18:25:41 crc kubenswrapper[5049]: I0127 18:25:41.073068 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3c07f615-4864-4724-b449-b5c91c539778-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"3c07f615-4864-4724-b449-b5c91c539778\") " pod="openstack/ovn-copy-data" Jan 27 18:25:41 crc kubenswrapper[5049]: I0127 18:25:41.073243 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fe0a249f-48f1-4ac4-a264-e4eae1834230\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe0a249f-48f1-4ac4-a264-e4eae1834230\") pod \"ovn-copy-data\" (UID: \"3c07f615-4864-4724-b449-b5c91c539778\") " pod="openstack/ovn-copy-data" Jan 
27 18:25:41 crc kubenswrapper[5049]: I0127 18:25:41.077171 5049 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 18:25:41 crc kubenswrapper[5049]: I0127 18:25:41.077317 5049 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fe0a249f-48f1-4ac4-a264-e4eae1834230\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe0a249f-48f1-4ac4-a264-e4eae1834230\") pod \"ovn-copy-data\" (UID: \"3c07f615-4864-4724-b449-b5c91c539778\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6784510919baf1807bb49e7219b95ce1553a5ced7534de385320cb9329646fca/globalmount\"" pod="openstack/ovn-copy-data" Jan 27 18:25:41 crc kubenswrapper[5049]: I0127 18:25:41.080847 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/3c07f615-4864-4724-b449-b5c91c539778-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"3c07f615-4864-4724-b449-b5c91c539778\") " pod="openstack/ovn-copy-data" Jan 27 18:25:41 crc kubenswrapper[5049]: I0127 18:25:41.097404 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdhpk\" (UniqueName: \"kubernetes.io/projected/3c07f615-4864-4724-b449-b5c91c539778-kube-api-access-pdhpk\") pod \"ovn-copy-data\" (UID: \"3c07f615-4864-4724-b449-b5c91c539778\") " pod="openstack/ovn-copy-data" Jan 27 18:25:41 crc kubenswrapper[5049]: I0127 18:25:41.108817 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fe0a249f-48f1-4ac4-a264-e4eae1834230\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe0a249f-48f1-4ac4-a264-e4eae1834230\") pod \"ovn-copy-data\" (UID: \"3c07f615-4864-4724-b449-b5c91c539778\") " pod="openstack/ovn-copy-data" Jan 27 18:25:41 crc kubenswrapper[5049]: I0127 18:25:41.208054 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Jan 27 18:25:41 crc kubenswrapper[5049]: I0127 18:25:41.768307 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Jan 27 18:25:41 crc kubenswrapper[5049]: W0127 18:25:41.772929 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c07f615_4864_4724_b449_b5c91c539778.slice/crio-a6cf97eb2d53504c2e4577e2622e9aa641afb0a0a5bb38b7ef6f05ae5e689068 WatchSource:0}: Error finding container a6cf97eb2d53504c2e4577e2622e9aa641afb0a0a5bb38b7ef6f05ae5e689068: Status 404 returned error can't find the container with id a6cf97eb2d53504c2e4577e2622e9aa641afb0a0a5bb38b7ef6f05ae5e689068 Jan 27 18:25:42 crc kubenswrapper[5049]: I0127 18:25:42.061093 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"3c07f615-4864-4724-b449-b5c91c539778","Type":"ContainerStarted","Data":"4031042e34e9a1255b47a1bfbd016d34bbe20b59cbaee1765349e97aeb019771"} Jan 27 18:25:42 crc kubenswrapper[5049]: I0127 18:25:42.061145 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"3c07f615-4864-4724-b449-b5c91c539778","Type":"ContainerStarted","Data":"a6cf97eb2d53504c2e4577e2622e9aa641afb0a0a5bb38b7ef6f05ae5e689068"} Jan 27 18:25:42 crc kubenswrapper[5049]: I0127 18:25:42.077076 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.077059062 podStartE2EDuration="3.077059062s" podCreationTimestamp="2026-01-27 18:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:25:42.076266549 +0000 UTC m=+5317.175240138" watchObservedRunningTime="2026-01-27 18:25:42.077059062 +0000 UTC m=+5317.176032631" Jan 27 18:25:44 crc kubenswrapper[5049]: I0127 18:25:44.987999 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:25:45 crc kubenswrapper[5049]: I0127 18:25:45.056886 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lkkzt"] Jan 27 18:25:45 crc kubenswrapper[5049]: I0127 18:25:45.058714 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" podUID="a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3" containerName="dnsmasq-dns" containerID="cri-o://882ba566feda02029bd733dc7145cd37495c90b4222c38075ccd23b187404abf" gracePeriod=10 Jan 27 18:25:45 crc kubenswrapper[5049]: I0127 18:25:45.516833 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:25:45 crc kubenswrapper[5049]: I0127 18:25:45.649290 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8qrj\" (UniqueName: \"kubernetes.io/projected/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-kube-api-access-k8qrj\") pod \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\" (UID: \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\") " Jan 27 18:25:45 crc kubenswrapper[5049]: I0127 18:25:45.649395 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-dns-svc\") pod \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\" (UID: \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\") " Jan 27 18:25:45 crc kubenswrapper[5049]: I0127 18:25:45.649426 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-config\") pod \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\" (UID: \"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3\") " Jan 27 18:25:45 crc kubenswrapper[5049]: I0127 18:25:45.663536 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-kube-api-access-k8qrj" (OuterVolumeSpecName: "kube-api-access-k8qrj") pod "a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3" (UID: "a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3"). InnerVolumeSpecName "kube-api-access-k8qrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:25:45 crc kubenswrapper[5049]: I0127 18:25:45.709949 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-config" (OuterVolumeSpecName: "config") pod "a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3" (UID: "a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:25:45 crc kubenswrapper[5049]: I0127 18:25:45.714770 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3" (UID: "a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:25:45 crc kubenswrapper[5049]: I0127 18:25:45.751997 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8qrj\" (UniqueName: \"kubernetes.io/projected/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-kube-api-access-k8qrj\") on node \"crc\" DevicePath \"\"" Jan 27 18:25:45 crc kubenswrapper[5049]: I0127 18:25:45.752037 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:25:45 crc kubenswrapper[5049]: I0127 18:25:45.752050 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:25:46 crc kubenswrapper[5049]: I0127 18:25:46.111849 5049 generic.go:334] "Generic (PLEG): container finished" podID="a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3" containerID="882ba566feda02029bd733dc7145cd37495c90b4222c38075ccd23b187404abf" exitCode=0 Jan 27 18:25:46 crc kubenswrapper[5049]: I0127 18:25:46.111914 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" event={"ID":"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3","Type":"ContainerDied","Data":"882ba566feda02029bd733dc7145cd37495c90b4222c38075ccd23b187404abf"} Jan 27 18:25:46 crc kubenswrapper[5049]: I0127 18:25:46.112007 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" event={"ID":"a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3","Type":"ContainerDied","Data":"93e46dc034a3ad21f31d5c9e99f43cbd33bb0d5f28fd6014e842dbf91ac0b43d"} Jan 27 18:25:46 crc kubenswrapper[5049]: I0127 18:25:46.112040 5049 scope.go:117] "RemoveContainer" containerID="882ba566feda02029bd733dc7145cd37495c90b4222c38075ccd23b187404abf" Jan 27 18:25:46 crc kubenswrapper[5049]: I0127 18:25:46.111936 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-lkkzt" Jan 27 18:25:46 crc kubenswrapper[5049]: I0127 18:25:46.133360 5049 scope.go:117] "RemoveContainer" containerID="21676ab9d06b522cb38538721accc89f088519d76951a999baed4e9bd05fd288" Jan 27 18:25:46 crc kubenswrapper[5049]: I0127 18:25:46.160217 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lkkzt"] Jan 27 18:25:46 crc kubenswrapper[5049]: I0127 18:25:46.172539 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-lkkzt"] Jan 27 18:25:46 crc kubenswrapper[5049]: I0127 18:25:46.184888 5049 scope.go:117] "RemoveContainer" containerID="882ba566feda02029bd733dc7145cd37495c90b4222c38075ccd23b187404abf" Jan 27 18:25:46 crc kubenswrapper[5049]: E0127 18:25:46.185526 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"882ba566feda02029bd733dc7145cd37495c90b4222c38075ccd23b187404abf\": container with ID starting with 882ba566feda02029bd733dc7145cd37495c90b4222c38075ccd23b187404abf not found: ID does not exist" containerID="882ba566feda02029bd733dc7145cd37495c90b4222c38075ccd23b187404abf" Jan 27 18:25:46 crc kubenswrapper[5049]: I0127 18:25:46.185593 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"882ba566feda02029bd733dc7145cd37495c90b4222c38075ccd23b187404abf"} err="failed to get container status \"882ba566feda02029bd733dc7145cd37495c90b4222c38075ccd23b187404abf\": rpc error: code = NotFound desc = could not find container \"882ba566feda02029bd733dc7145cd37495c90b4222c38075ccd23b187404abf\": container with ID starting with 882ba566feda02029bd733dc7145cd37495c90b4222c38075ccd23b187404abf not found: ID does not exist" Jan 27 18:25:46 crc kubenswrapper[5049]: I0127 18:25:46.185635 5049 scope.go:117] "RemoveContainer" containerID="21676ab9d06b522cb38538721accc89f088519d76951a999baed4e9bd05fd288" Jan 27 18:25:46 crc kubenswrapper[5049]: E0127 18:25:46.186193 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21676ab9d06b522cb38538721accc89f088519d76951a999baed4e9bd05fd288\": container with ID starting with 21676ab9d06b522cb38538721accc89f088519d76951a999baed4e9bd05fd288 not found: ID does not exist" containerID="21676ab9d06b522cb38538721accc89f088519d76951a999baed4e9bd05fd288" Jan 27 18:25:46 crc kubenswrapper[5049]: I0127 18:25:46.186262 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21676ab9d06b522cb38538721accc89f088519d76951a999baed4e9bd05fd288"} err="failed to get container status \"21676ab9d06b522cb38538721accc89f088519d76951a999baed4e9bd05fd288\": rpc error: code = NotFound desc = could not find container \"21676ab9d06b522cb38538721accc89f088519d76951a999baed4e9bd05fd288\": container with ID starting with 21676ab9d06b522cb38538721accc89f088519d76951a999baed4e9bd05fd288 not found: ID does not exist" Jan 27 18:25:47 crc kubenswrapper[5049]: I0127 18:25:47.647250 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:25:47 crc kubenswrapper[5049]: E0127 18:25:47.647741 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:25:47 crc kubenswrapper[5049]: I0127 18:25:47.662599 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3" path="/var/lib/kubelet/pods/a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3/volumes" Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.049767 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 18:25:48 crc kubenswrapper[5049]: E0127 18:25:48.050438 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3" containerName="dnsmasq-dns" Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.050458 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3" containerName="dnsmasq-dns" Jan 27 18:25:48 crc kubenswrapper[5049]: E0127 18:25:48.050489 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3" containerName="init" Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.050500 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3" containerName="init" Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.050726 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c70c1f-93c2-48ba-a67b-c5519a3e0ea3" containerName="dnsmasq-dns" Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.058894 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.063350 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-f6nqz" Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.063666 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.063941 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.097633 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.195940 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-scripts\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0" Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.196007 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-config\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0" Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.196026 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0" Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 
18:25:48.196076 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0"
Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.196115 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn24c\" (UniqueName: \"kubernetes.io/projected/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-kube-api-access-qn24c\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0"
Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.297858 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-scripts\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0"
Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.297935 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-config\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0"
Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.297963 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0"
Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.298032 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0"
Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.298086 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn24c\" (UniqueName: \"kubernetes.io/projected/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-kube-api-access-qn24c\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0"
Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.298988 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0"
Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.299018 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-scripts\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0"
Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.299057 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-config\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0"
Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.315632 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0"
Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.318822 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn24c\" (UniqueName: \"kubernetes.io/projected/4b3f72b8-e2df-430e-b81c-b2d59bf6b022-kube-api-access-qn24c\") pod \"ovn-northd-0\" (UID: \"4b3f72b8-e2df-430e-b81c-b2d59bf6b022\") " pod="openstack/ovn-northd-0"
Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.387325 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 27 18:25:48 crc kubenswrapper[5049]: I0127 18:25:48.848846 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 27 18:25:49 crc kubenswrapper[5049]: I0127 18:25:49.140947 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4b3f72b8-e2df-430e-b81c-b2d59bf6b022","Type":"ContainerStarted","Data":"a1ef940ac5c4629ac4cc122a629aeac8d29833725c5063f38747540b209c2cb9"}
Jan 27 18:25:49 crc kubenswrapper[5049]: I0127 18:25:49.141449 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4b3f72b8-e2df-430e-b81c-b2d59bf6b022","Type":"ContainerStarted","Data":"a3bbc43426ce224458a2d24d0fcc2173059b41e1e6194f42578af9e41ab0599b"}
Jan 27 18:25:49 crc kubenswrapper[5049]: E0127 18:25:49.226964 5049 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.20:43186->38.102.83.20:40557: write tcp 38.102.83.20:43186->38.102.83.20:40557: write: broken pipe
Jan 27 18:25:50 crc kubenswrapper[5049]: I0127 18:25:50.151178 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4b3f72b8-e2df-430e-b81c-b2d59bf6b022","Type":"ContainerStarted","Data":"f99b93409086799bc70d13e7513e19c89423bf7d4d53026db88c582f8d4ca2d2"}
Jan 27 18:25:50 crc kubenswrapper[5049]: I0127 18:25:50.152383 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 27 18:25:50 crc kubenswrapper[5049]: I0127 18:25:50.172298 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.172275241 podStartE2EDuration="2.172275241s" podCreationTimestamp="2026-01-27 18:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:25:50.172254701 +0000 UTC m=+5325.271228250" watchObservedRunningTime="2026-01-27 18:25:50.172275241 +0000 UTC m=+5325.271248780"
Jan 27 18:25:51 crc kubenswrapper[5049]: I0127 18:25:51.726560 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lfrws"]
Jan 27 18:25:51 crc kubenswrapper[5049]: I0127 18:25:51.728896 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfrws"
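
The ovn-northd-0 entries above show the kubelet's three-step volume flow: reconciler_common.go logs "operationExecutor.VerifyControllerAttachedVolume started" per declared volume, then "operationExecutor.MountVolume started", and operation_generator.go finally reports "MountVolume.SetUp succeeded". A minimal Go sketch for pulling per-volume setup latency out of lines like these; the regexes and the idea of piping journal output into it are my assumptions, not kubelet tooling:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches e.g.: I0127 18:25:48.297858 ... "operationExecutor.MountVolume started for volume \"scripts\" ...
var (
	started   = regexp.MustCompile(`I\d{4} (\d{2}:\d{2}:\d{2}\.\d{6}).*MountVolume started for volume \\"([^\\]+)\\"`)
	succeeded = regexp.MustCompile(`I\d{4} (\d{2}:\d{2}:\d{2}\.\d{6}).*MountVolume\.SetUp succeeded for volume \\"([^\\]+)\\"`)
)

func main() {
	// Keyed by volume name only, so this is only good enough for one pod's lines at a time.
	begun := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be very long
	for sc.Scan() {
		if m := started.FindStringSubmatch(sc.Text()); m != nil {
			begun[m[2]], _ = time.Parse("15:04:05.000000", m[1])
		} else if m := succeeded.FindStringSubmatch(sc.Text()); m != nil {
			done, _ := time.Parse("15:04:05.000000", m[1])
			if b, ok := begun[m[2]]; ok {
				fmt.Printf("%-25s %v\n", m[2], done.Sub(b))
			}
		}
	}
}

Fed the entries above, it would report roughly 1ms for the configmap volumes and about 18ms for the combined-ca-bundle secret (started .297963, succeeded .315632).
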
Jan 27 18:25:51 crc kubenswrapper[5049]: I0127 18:25:51.745349 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lfrws"]
Jan 27 18:25:51 crc kubenswrapper[5049]: I0127 18:25:51.864079 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd4pl\" (UniqueName: \"kubernetes.io/projected/9207912b-b3a3-4915-8191-3f2783ab5a8d-kube-api-access-qd4pl\") pod \"redhat-operators-lfrws\" (UID: \"9207912b-b3a3-4915-8191-3f2783ab5a8d\") " pod="openshift-marketplace/redhat-operators-lfrws"
Jan 27 18:25:51 crc kubenswrapper[5049]: I0127 18:25:51.864163 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9207912b-b3a3-4915-8191-3f2783ab5a8d-catalog-content\") pod \"redhat-operators-lfrws\" (UID: \"9207912b-b3a3-4915-8191-3f2783ab5a8d\") " pod="openshift-marketplace/redhat-operators-lfrws"
Jan 27 18:25:51 crc kubenswrapper[5049]: I0127 18:25:51.864264 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9207912b-b3a3-4915-8191-3f2783ab5a8d-utilities\") pod \"redhat-operators-lfrws\" (UID: \"9207912b-b3a3-4915-8191-3f2783ab5a8d\") " pod="openshift-marketplace/redhat-operators-lfrws"
Jan 27 18:25:51 crc kubenswrapper[5049]: I0127 18:25:51.966321 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9207912b-b3a3-4915-8191-3f2783ab5a8d-utilities\") pod \"redhat-operators-lfrws\" (UID: \"9207912b-b3a3-4915-8191-3f2783ab5a8d\") " pod="openshift-marketplace/redhat-operators-lfrws"
Jan 27 18:25:51 crc kubenswrapper[5049]: I0127 18:25:51.966422 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd4pl\" (UniqueName: \"kubernetes.io/projected/9207912b-b3a3-4915-8191-3f2783ab5a8d-kube-api-access-qd4pl\") pod \"redhat-operators-lfrws\" (UID: \"9207912b-b3a3-4915-8191-3f2783ab5a8d\") " pod="openshift-marketplace/redhat-operators-lfrws"
Jan 27 18:25:51 crc kubenswrapper[5049]: I0127 18:25:51.966482 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9207912b-b3a3-4915-8191-3f2783ab5a8d-catalog-content\") pod \"redhat-operators-lfrws\" (UID: \"9207912b-b3a3-4915-8191-3f2783ab5a8d\") " pod="openshift-marketplace/redhat-operators-lfrws"
Jan 27 18:25:51 crc kubenswrapper[5049]: I0127 18:25:51.967028 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9207912b-b3a3-4915-8191-3f2783ab5a8d-catalog-content\") pod \"redhat-operators-lfrws\" (UID: \"9207912b-b3a3-4915-8191-3f2783ab5a8d\") " pod="openshift-marketplace/redhat-operators-lfrws"
Jan 27 18:25:51 crc kubenswrapper[5049]: I0127 18:25:51.967332 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9207912b-b3a3-4915-8191-3f2783ab5a8d-utilities\") pod \"redhat-operators-lfrws\" (UID: \"9207912b-b3a3-4915-8191-3f2783ab5a8d\") " pod="openshift-marketplace/redhat-operators-lfrws"
Jan 27 18:25:51 crc kubenswrapper[5049]: I0127 18:25:51.989406 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd4pl\" (UniqueName: \"kubernetes.io/projected/9207912b-b3a3-4915-8191-3f2783ab5a8d-kube-api-access-qd4pl\") pod \"redhat-operators-lfrws\" (UID: \"9207912b-b3a3-4915-8191-3f2783ab5a8d\") " pod="openshift-marketplace/redhat-operators-lfrws"
Jan 27 18:25:52 crc kubenswrapper[5049]: I0127 18:25:52.060948 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfrws"
Jan 27 18:25:52 crc kubenswrapper[5049]: I0127 18:25:52.497281 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lfrws"]
Jan 27 18:25:52 crc kubenswrapper[5049]: W0127 18:25:52.499207 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9207912b_b3a3_4915_8191_3f2783ab5a8d.slice/crio-b8a2fb3cc70ae6c74cd4626ef5eca0bb25a3b952f10b97f2e23a059bee84abc6 WatchSource:0}: Error finding container b8a2fb3cc70ae6c74cd4626ef5eca0bb25a3b952f10b97f2e23a059bee84abc6: Status 404 returned error can't find the container with id b8a2fb3cc70ae6c74cd4626ef5eca0bb25a3b952f10b97f2e23a059bee84abc6
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.116312 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-hmnhj"]
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.117530 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hmnhj"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.125807 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-hmnhj"]
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.178383 5049 generic.go:334] "Generic (PLEG): container finished" podID="9207912b-b3a3-4915-8191-3f2783ab5a8d" containerID="c7b4193fa2738e040a0dec3efdb322be7039e94695da4b1dc77c9d838cf2ee22" exitCode=0
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.178436 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfrws" event={"ID":"9207912b-b3a3-4915-8191-3f2783ab5a8d","Type":"ContainerDied","Data":"c7b4193fa2738e040a0dec3efdb322be7039e94695da4b1dc77c9d838cf2ee22"}
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.178479 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfrws" event={"ID":"9207912b-b3a3-4915-8191-3f2783ab5a8d","Type":"ContainerStarted","Data":"b8a2fb3cc70ae6c74cd4626ef5eca0bb25a3b952f10b97f2e23a059bee84abc6"}
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.180157 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.186748 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njqbd\" (UniqueName: \"kubernetes.io/projected/51e4e3b8-37e1-45ab-ba2f-d9e426926055-kube-api-access-njqbd\") pod \"keystone-db-create-hmnhj\" (UID: \"51e4e3b8-37e1-45ab-ba2f-d9e426926055\") " pod="openstack/keystone-db-create-hmnhj"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.186929 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e4e3b8-37e1-45ab-ba2f-d9e426926055-operator-scripts\") pod \"keystone-db-create-hmnhj\" (UID: \"51e4e3b8-37e1-45ab-ba2f-d9e426926055\") " pod="openstack/keystone-db-create-hmnhj"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.223696 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7eec-account-create-update-xbn8m"]
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.224910 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7eec-account-create-update-xbn8m"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.227440 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.236807 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7eec-account-create-update-xbn8m"]
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.288667 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56a71484-f8e0-4b87-91a8-d1e16dd46958-operator-scripts\") pod \"keystone-7eec-account-create-update-xbn8m\" (UID: \"56a71484-f8e0-4b87-91a8-d1e16dd46958\") " pod="openstack/keystone-7eec-account-create-update-xbn8m"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.288771 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e4e3b8-37e1-45ab-ba2f-d9e426926055-operator-scripts\") pod \"keystone-db-create-hmnhj\" (UID: \"51e4e3b8-37e1-45ab-ba2f-d9e426926055\") " pod="openstack/keystone-db-create-hmnhj"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.288867 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sztlk\" (UniqueName: \"kubernetes.io/projected/56a71484-f8e0-4b87-91a8-d1e16dd46958-kube-api-access-sztlk\") pod \"keystone-7eec-account-create-update-xbn8m\" (UID: \"56a71484-f8e0-4b87-91a8-d1e16dd46958\") " pod="openstack/keystone-7eec-account-create-update-xbn8m"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.288897 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njqbd\" (UniqueName: \"kubernetes.io/projected/51e4e3b8-37e1-45ab-ba2f-d9e426926055-kube-api-access-njqbd\") pod \"keystone-db-create-hmnhj\" (UID: \"51e4e3b8-37e1-45ab-ba2f-d9e426926055\") " pod="openstack/keystone-db-create-hmnhj"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.289553 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e4e3b8-37e1-45ab-ba2f-d9e426926055-operator-scripts\") pod \"keystone-db-create-hmnhj\" (UID: \"51e4e3b8-37e1-45ab-ba2f-d9e426926055\") " pod="openstack/keystone-db-create-hmnhj"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.308430 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njqbd\" (UniqueName: \"kubernetes.io/projected/51e4e3b8-37e1-45ab-ba2f-d9e426926055-kube-api-access-njqbd\") pod \"keystone-db-create-hmnhj\" (UID: \"51e4e3b8-37e1-45ab-ba2f-d9e426926055\") " pod="openstack/keystone-db-create-hmnhj"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.394729 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56a71484-f8e0-4b87-91a8-d1e16dd46958-operator-scripts\") pod \"keystone-7eec-account-create-update-xbn8m\" (UID: \"56a71484-f8e0-4b87-91a8-d1e16dd46958\") " pod="openstack/keystone-7eec-account-create-update-xbn8m"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.394980 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sztlk\" (UniqueName: \"kubernetes.io/projected/56a71484-f8e0-4b87-91a8-d1e16dd46958-kube-api-access-sztlk\") pod \"keystone-7eec-account-create-update-xbn8m\" (UID: \"56a71484-f8e0-4b87-91a8-d1e16dd46958\") " pod="openstack/keystone-7eec-account-create-update-xbn8m"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.395568 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56a71484-f8e0-4b87-91a8-d1e16dd46958-operator-scripts\") pod \"keystone-7eec-account-create-update-xbn8m\" (UID: \"56a71484-f8e0-4b87-91a8-d1e16dd46958\") " pod="openstack/keystone-7eec-account-create-update-xbn8m"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.410606 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sztlk\" (UniqueName: \"kubernetes.io/projected/56a71484-f8e0-4b87-91a8-d1e16dd46958-kube-api-access-sztlk\") pod \"keystone-7eec-account-create-update-xbn8m\" (UID: \"56a71484-f8e0-4b87-91a8-d1e16dd46958\") " pod="openstack/keystone-7eec-account-create-update-xbn8m"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.440509 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hmnhj"
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.545119 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7eec-account-create-update-xbn8m"
Jan 27 18:25:53 crc kubenswrapper[5049]: W0127 18:25:53.899156 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51e4e3b8_37e1_45ab_ba2f_d9e426926055.slice/crio-2da80889c9d0f8b955edf030cf20c8ed0b60bbbe719fb526031ab02e9488c8f1 WatchSource:0}: Error finding container 2da80889c9d0f8b955edf030cf20c8ed0b60bbbe719fb526031ab02e9488c8f1: Status 404 returned error can't find the container with id 2da80889c9d0f8b955edf030cf20c8ed0b60bbbe719fb526031ab02e9488c8f1
Jan 27 18:25:53 crc kubenswrapper[5049]: I0127 18:25:53.900406 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-hmnhj"]
Jan 27 18:25:54 crc kubenswrapper[5049]: I0127 18:25:54.000582 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7eec-account-create-update-xbn8m"]
Jan 27 18:25:54 crc kubenswrapper[5049]: W0127 18:25:54.005167 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56a71484_f8e0_4b87_91a8_d1e16dd46958.slice/crio-fcfd604ed972eacd2ea4e08a8e736b8aaa1fad059e1623ce515c73cbf85b8bc5 WatchSource:0}: Error finding container fcfd604ed972eacd2ea4e08a8e736b8aaa1fad059e1623ce515c73cbf85b8bc5: Status 404 returned error can't find the container with id fcfd604ed972eacd2ea4e08a8e736b8aaa1fad059e1623ce515c73cbf85b8bc5
Jan 27 18:25:54 crc kubenswrapper[5049]: I0127 18:25:54.187326 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7eec-account-create-update-xbn8m" event={"ID":"56a71484-f8e0-4b87-91a8-d1e16dd46958","Type":"ContainerStarted","Data":"671ad2ae68d44bdbaedad93b3c24adbbb9cf2d0f7b7a86a14d12c01aa15dd0ac"}
Jan 27 18:25:54 crc kubenswrapper[5049]: I0127 18:25:54.187701 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7eec-account-create-update-xbn8m" event={"ID":"56a71484-f8e0-4b87-91a8-d1e16dd46958","Type":"ContainerStarted","Data":"fcfd604ed972eacd2ea4e08a8e736b8aaa1fad059e1623ce515c73cbf85b8bc5"}
Jan 27 18:25:54 crc kubenswrapper[5049]: I0127 18:25:54.191065 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hmnhj" event={"ID":"51e4e3b8-37e1-45ab-ba2f-d9e426926055","Type":"ContainerStarted","Data":"8a2311d36b4d5270800ca17ed852c09554837c96e98626c1f133e3ad58896d3f"}
Jan 27 18:25:54 crc kubenswrapper[5049]: I0127 18:25:54.191116 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hmnhj" event={"ID":"51e4e3b8-37e1-45ab-ba2f-d9e426926055","Type":"ContainerStarted","Data":"2da80889c9d0f8b955edf030cf20c8ed0b60bbbe719fb526031ab02e9488c8f1"}
Jan 27 18:25:54 crc kubenswrapper[5049]: I0127 18:25:54.196342 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfrws" event={"ID":"9207912b-b3a3-4915-8191-3f2783ab5a8d","Type":"ContainerStarted","Data":"93718dac57fc8897a375e725503b16649fe33dd181b37f220d24e0a25576e30f"}
Jan 27 18:25:54 crc kubenswrapper[5049]: I0127 18:25:54.211392 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7eec-account-create-update-xbn8m" podStartSLOduration=1.211376956 podStartE2EDuration="1.211376956s" podCreationTimestamp="2026-01-27 18:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:25:54.204851853 +0000 UTC m=+5329.303825422" watchObservedRunningTime="2026-01-27 18:25:54.211376956 +0000 UTC m=+5329.310350505"
Jan 27 18:25:54 crc kubenswrapper[5049]: I0127 18:25:54.221460 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-hmnhj" podStartSLOduration=1.22144252 podStartE2EDuration="1.22144252s" podCreationTimestamp="2026-01-27 18:25:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:25:54.218780505 +0000 UTC m=+5329.317754074" watchObservedRunningTime="2026-01-27 18:25:54.22144252 +0000 UTC m=+5329.320416079"
Jan 27 18:25:55 crc kubenswrapper[5049]: I0127 18:25:55.214225 5049 generic.go:334] "Generic (PLEG): container finished" podID="56a71484-f8e0-4b87-91a8-d1e16dd46958" containerID="671ad2ae68d44bdbaedad93b3c24adbbb9cf2d0f7b7a86a14d12c01aa15dd0ac" exitCode=0
Jan 27 18:25:55 crc kubenswrapper[5049]: I0127 18:25:55.214495 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7eec-account-create-update-xbn8m" event={"ID":"56a71484-f8e0-4b87-91a8-d1e16dd46958","Type":"ContainerDied","Data":"671ad2ae68d44bdbaedad93b3c24adbbb9cf2d0f7b7a86a14d12c01aa15dd0ac"}
Jan 27 18:25:55 crc kubenswrapper[5049]: I0127 18:25:55.219230 5049 generic.go:334] "Generic (PLEG): container finished" podID="51e4e3b8-37e1-45ab-ba2f-d9e426926055" containerID="8a2311d36b4d5270800ca17ed852c09554837c96e98626c1f133e3ad58896d3f" exitCode=0
Jan 27 18:25:55 crc kubenswrapper[5049]: I0127 18:25:55.219424 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hmnhj" event={"ID":"51e4e3b8-37e1-45ab-ba2f-d9e426926055","Type":"ContainerDied","Data":"8a2311d36b4d5270800ca17ed852c09554837c96e98626c1f133e3ad58896d3f"}
Jan 27 18:25:55 crc kubenswrapper[5049]: I0127 18:25:55.223113 5049 generic.go:334] "Generic (PLEG): container finished" podID="9207912b-b3a3-4915-8191-3f2783ab5a8d" containerID="93718dac57fc8897a375e725503b16649fe33dd181b37f220d24e0a25576e30f" exitCode=0
containerID="93718dac57fc8897a375e725503b16649fe33dd181b37f220d24e0a25576e30f" exitCode=0 Jan 27 18:25:55 crc kubenswrapper[5049]: I0127 18:25:55.223197 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfrws" event={"ID":"9207912b-b3a3-4915-8191-3f2783ab5a8d","Type":"ContainerDied","Data":"93718dac57fc8897a375e725503b16649fe33dd181b37f220d24e0a25576e30f"} Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.232620 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfrws" event={"ID":"9207912b-b3a3-4915-8191-3f2783ab5a8d","Type":"ContainerStarted","Data":"28ed34fc39f72749146f87d71b233d0a1b077240014a1fa952ae1d8fcb06d423"} Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.262343 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lfrws" podStartSLOduration=2.765323249 podStartE2EDuration="5.262327824s" podCreationTimestamp="2026-01-27 18:25:51 +0000 UTC" firstStartedPulling="2026-01-27 18:25:53.179975957 +0000 UTC m=+5328.278949506" lastFinishedPulling="2026-01-27 18:25:55.676980522 +0000 UTC m=+5330.775954081" observedRunningTime="2026-01-27 18:25:56.260749609 +0000 UTC m=+5331.359723168" watchObservedRunningTime="2026-01-27 18:25:56.262327824 +0000 UTC m=+5331.361301373" Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.689433 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-hmnhj" Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.760175 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e4e3b8-37e1-45ab-ba2f-d9e426926055-operator-scripts\") pod \"51e4e3b8-37e1-45ab-ba2f-d9e426926055\" (UID: \"51e4e3b8-37e1-45ab-ba2f-d9e426926055\") " Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.760241 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njqbd\" (UniqueName: \"kubernetes.io/projected/51e4e3b8-37e1-45ab-ba2f-d9e426926055-kube-api-access-njqbd\") pod \"51e4e3b8-37e1-45ab-ba2f-d9e426926055\" (UID: \"51e4e3b8-37e1-45ab-ba2f-d9e426926055\") " Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.763142 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e4e3b8-37e1-45ab-ba2f-d9e426926055-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51e4e3b8-37e1-45ab-ba2f-d9e426926055" (UID: "51e4e3b8-37e1-45ab-ba2f-d9e426926055"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.766690 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e4e3b8-37e1-45ab-ba2f-d9e426926055-kube-api-access-njqbd" (OuterVolumeSpecName: "kube-api-access-njqbd") pod "51e4e3b8-37e1-45ab-ba2f-d9e426926055" (UID: "51e4e3b8-37e1-45ab-ba2f-d9e426926055"). InnerVolumeSpecName "kube-api-access-njqbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.805869 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7eec-account-create-update-xbn8m" Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.862011 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56a71484-f8e0-4b87-91a8-d1e16dd46958-operator-scripts\") pod \"56a71484-f8e0-4b87-91a8-d1e16dd46958\" (UID: \"56a71484-f8e0-4b87-91a8-d1e16dd46958\") " Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.862210 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sztlk\" (UniqueName: \"kubernetes.io/projected/56a71484-f8e0-4b87-91a8-d1e16dd46958-kube-api-access-sztlk\") pod \"56a71484-f8e0-4b87-91a8-d1e16dd46958\" (UID: \"56a71484-f8e0-4b87-91a8-d1e16dd46958\") " Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.862565 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njqbd\" (UniqueName: \"kubernetes.io/projected/51e4e3b8-37e1-45ab-ba2f-d9e426926055-kube-api-access-njqbd\") on node \"crc\" DevicePath \"\"" Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.862585 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e4e3b8-37e1-45ab-ba2f-d9e426926055-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.862812 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56a71484-f8e0-4b87-91a8-d1e16dd46958-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56a71484-f8e0-4b87-91a8-d1e16dd46958" (UID: "56a71484-f8e0-4b87-91a8-d1e16dd46958"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.867871 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56a71484-f8e0-4b87-91a8-d1e16dd46958-kube-api-access-sztlk" (OuterVolumeSpecName: "kube-api-access-sztlk") pod "56a71484-f8e0-4b87-91a8-d1e16dd46958" (UID: "56a71484-f8e0-4b87-91a8-d1e16dd46958"). InnerVolumeSpecName "kube-api-access-sztlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.964467 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sztlk\" (UniqueName: \"kubernetes.io/projected/56a71484-f8e0-4b87-91a8-d1e16dd46958-kube-api-access-sztlk\") on node \"crc\" DevicePath \"\"" Jan 27 18:25:56 crc kubenswrapper[5049]: I0127 18:25:56.964500 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56a71484-f8e0-4b87-91a8-d1e16dd46958-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:25:57 crc kubenswrapper[5049]: I0127 18:25:57.241867 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-hmnhj" event={"ID":"51e4e3b8-37e1-45ab-ba2f-d9e426926055","Type":"ContainerDied","Data":"2da80889c9d0f8b955edf030cf20c8ed0b60bbbe719fb526031ab02e9488c8f1"} Jan 27 18:25:57 crc kubenswrapper[5049]: I0127 18:25:57.241940 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2da80889c9d0f8b955edf030cf20c8ed0b60bbbe719fb526031ab02e9488c8f1" Jan 27 18:25:57 crc kubenswrapper[5049]: I0127 18:25:57.241975 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-hmnhj" Jan 27 18:25:57 crc kubenswrapper[5049]: I0127 18:25:57.245282 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7eec-account-create-update-xbn8m" event={"ID":"56a71484-f8e0-4b87-91a8-d1e16dd46958","Type":"ContainerDied","Data":"fcfd604ed972eacd2ea4e08a8e736b8aaa1fad059e1623ce515c73cbf85b8bc5"} Jan 27 18:25:57 crc kubenswrapper[5049]: I0127 18:25:57.245317 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7eec-account-create-update-xbn8m" Jan 27 18:25:57 crc kubenswrapper[5049]: I0127 18:25:57.245344 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcfd604ed972eacd2ea4e08a8e736b8aaa1fad059e1623ce515c73cbf85b8bc5" Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.637877 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-z8f98"] Jan 27 18:25:58 crc kubenswrapper[5049]: E0127 18:25:58.638481 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a71484-f8e0-4b87-91a8-d1e16dd46958" containerName="mariadb-account-create-update" Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.638492 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a71484-f8e0-4b87-91a8-d1e16dd46958" containerName="mariadb-account-create-update" Jan 27 18:25:58 crc kubenswrapper[5049]: E0127 18:25:58.638508 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e4e3b8-37e1-45ab-ba2f-d9e426926055" containerName="mariadb-database-create" Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.638513 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e4e3b8-37e1-45ab-ba2f-d9e426926055" containerName="mariadb-database-create" Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.638656 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="56a71484-f8e0-4b87-91a8-d1e16dd46958" containerName="mariadb-account-create-update" Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.638744 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e4e3b8-37e1-45ab-ba2f-d9e426926055" containerName="mariadb-database-create" Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.639299 5049 util.go:30] "No sandbox for pod can be found. 
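
Teardown of the two finished job pods above is the mirror image of the mount flow: "operationExecutor.UnmountVolume started", then "UnmountVolume.TearDown succeeded", and finally a "Volume detached ... DevicePath \"\"" line per volume. A companion sketch to the earlier one (same assumptions about the line layout) that flags any unmount which never reaches the detached state:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	unmount  = regexp.MustCompile(`UnmountVolume started for volume \\"([^\\]+)\\" \(UniqueName: \\"([^\\]+)\\"`)
	detached = regexp.MustCompile(`Volume detached for volume \\"[^\\]+\\" \(UniqueName: \\"([^\\]+)\\"`)
)

func main() {
	pending := map[string]string{} // UniqueName -> volume name still awaiting "Volume detached"
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		if m := unmount.FindStringSubmatch(sc.Text()); m != nil {
			pending[m[2]] = m[1]
		} else if m := detached.FindStringSubmatch(sc.Text()); m != nil {
			delete(pending, m[1])
		}
	}
	for unique, name := range pending {
		fmt.Printf("never detached: %s (%s)\n", name, unique)
	}
}

On the entries above it would print nothing: all four volumes of the two job pods reach "Volume detached" before the RemoveStaleState lines purge the pods' CPU and memory manager state.
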
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.641602 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.642529 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tncdc"
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.642990 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.643247 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.690070 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-z8f98"]
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.796735 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-combined-ca-bundle\") pod \"keystone-db-sync-z8f98\" (UID: \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\") " pod="openstack/keystone-db-sync-z8f98"
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.796885 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k7pm\" (UniqueName: \"kubernetes.io/projected/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-kube-api-access-7k7pm\") pod \"keystone-db-sync-z8f98\" (UID: \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\") " pod="openstack/keystone-db-sync-z8f98"
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.796909 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-config-data\") pod \"keystone-db-sync-z8f98\" (UID: \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\") " pod="openstack/keystone-db-sync-z8f98"
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.898713 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k7pm\" (UniqueName: \"kubernetes.io/projected/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-kube-api-access-7k7pm\") pod \"keystone-db-sync-z8f98\" (UID: \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\") " pod="openstack/keystone-db-sync-z8f98"
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.898981 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-config-data\") pod \"keystone-db-sync-z8f98\" (UID: \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\") " pod="openstack/keystone-db-sync-z8f98"
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.899138 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-combined-ca-bundle\") pod \"keystone-db-sync-z8f98\" (UID: \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\") " pod="openstack/keystone-db-sync-z8f98"
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.906091 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-combined-ca-bundle\") pod \"keystone-db-sync-z8f98\" (UID: \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\") " pod="openstack/keystone-db-sync-z8f98"
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.906181 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-config-data\") pod \"keystone-db-sync-z8f98\" (UID: \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\") " pod="openstack/keystone-db-sync-z8f98"
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.922490 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k7pm\" (UniqueName: \"kubernetes.io/projected/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-kube-api-access-7k7pm\") pod \"keystone-db-sync-z8f98\" (UID: \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\") " pod="openstack/keystone-db-sync-z8f98"
Jan 27 18:25:58 crc kubenswrapper[5049]: I0127 18:25:58.963956 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z8f98"
Jan 27 18:25:59 crc kubenswrapper[5049]: I0127 18:25:59.410044 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-z8f98"]
Jan 27 18:26:00 crc kubenswrapper[5049]: I0127 18:26:00.267904 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z8f98" event={"ID":"9f86ad93-5ff6-419f-a10a-b88ce9d4706d","Type":"ContainerStarted","Data":"962c0f07ae1e48ac1399fff055cecfe27764c8a0fbe0b7ba1b9487d161cd843e"}
Jan 27 18:26:00 crc kubenswrapper[5049]: I0127 18:26:00.267964 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z8f98" event={"ID":"9f86ad93-5ff6-419f-a10a-b88ce9d4706d","Type":"ContainerStarted","Data":"01dfacb5337194b092e02bafdf1b953223a50728a52f06f49d6df9c9f1dc9b12"}
Jan 27 18:26:00 crc kubenswrapper[5049]: I0127 18:26:00.286859 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-z8f98" podStartSLOduration=2.286835178 podStartE2EDuration="2.286835178s" podCreationTimestamp="2026-01-27 18:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:26:00.286229281 +0000 UTC m=+5335.385202860" watchObservedRunningTime="2026-01-27 18:26:00.286835178 +0000 UTC m=+5335.385808737"
Jan 27 18:26:01 crc kubenswrapper[5049]: I0127 18:26:01.278822 5049 generic.go:334] "Generic (PLEG): container finished" podID="9f86ad93-5ff6-419f-a10a-b88ce9d4706d" containerID="962c0f07ae1e48ac1399fff055cecfe27764c8a0fbe0b7ba1b9487d161cd843e" exitCode=0
Jan 27 18:26:01 crc kubenswrapper[5049]: I0127 18:26:01.278867 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z8f98" event={"ID":"9f86ad93-5ff6-419f-a10a-b88ce9d4706d","Type":"ContainerDied","Data":"962c0f07ae1e48ac1399fff055cecfe27764c8a0fbe0b7ba1b9487d161cd843e"}
Jan 27 18:26:02 crc kubenswrapper[5049]: I0127 18:26:02.062142 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lfrws"
Jan 27 18:26:02 crc kubenswrapper[5049]: I0127 18:26:02.062561 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lfrws"
Jan 27 18:26:02 crc kubenswrapper[5049]: I0127 18:26:02.616162 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z8f98"
Jan 27 18:26:02 crc kubenswrapper[5049]: I0127 18:26:02.646238 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192"
Jan 27 18:26:02 crc kubenswrapper[5049]: E0127 18:26:02.646738 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:26:02 crc kubenswrapper[5049]: I0127 18:26:02.763488 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k7pm\" (UniqueName: \"kubernetes.io/projected/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-kube-api-access-7k7pm\") pod \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\" (UID: \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\") "
Jan 27 18:26:02 crc kubenswrapper[5049]: I0127 18:26:02.764011 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-config-data\") pod \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\" (UID: \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\") "
Jan 27 18:26:02 crc kubenswrapper[5049]: I0127 18:26:02.764178 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-combined-ca-bundle\") pod \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\" (UID: \"9f86ad93-5ff6-419f-a10a-b88ce9d4706d\") "
Jan 27 18:26:02 crc kubenswrapper[5049]: I0127 18:26:02.771695 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-kube-api-access-7k7pm" (OuterVolumeSpecName: "kube-api-access-7k7pm") pod "9f86ad93-5ff6-419f-a10a-b88ce9d4706d" (UID: "9f86ad93-5ff6-419f-a10a-b88ce9d4706d"). InnerVolumeSpecName "kube-api-access-7k7pm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:26:02 crc kubenswrapper[5049]: I0127 18:26:02.789352 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f86ad93-5ff6-419f-a10a-b88ce9d4706d" (UID: "9f86ad93-5ff6-419f-a10a-b88ce9d4706d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:26:02 crc kubenswrapper[5049]: I0127 18:26:02.804127 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-config-data" (OuterVolumeSpecName: "config-data") pod "9f86ad93-5ff6-419f-a10a-b88ce9d4706d" (UID: "9f86ad93-5ff6-419f-a10a-b88ce9d4706d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
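
The machine-config-daemon entry above is already at the back-off ceiling: by default the kubelet doubles a crashing container's restart delay starting from 10s and caps it at 5m, which is where the quoted "back-off 5m0s" comes from (these are the documented Kubernetes defaults; this node's configuration isn't shown in the log). The ladder, spelled out:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute // documented kubelet defaults (assumption here)
	for restart := 1; delay < maxDelay; restart++ {
		fmt.Printf("restart %d: wait %v\n", restart, delay)
		delay *= 2 // 10s, 20s, 40s, 1m20s, 2m40s ...
	}
	fmt.Println("thereafter: wait", maxDelay) // the "back-off 5m0s" steady state logged above
}
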
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:02 crc kubenswrapper[5049]: I0127 18:26:02.866644 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:02 crc kubenswrapper[5049]: I0127 18:26:02.866732 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k7pm\" (UniqueName: \"kubernetes.io/projected/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-kube-api-access-7k7pm\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:02 crc kubenswrapper[5049]: I0127 18:26:02.866760 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f86ad93-5ff6-419f-a10a-b88ce9d4706d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.101536 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lfrws" podUID="9207912b-b3a3-4915-8191-3f2783ab5a8d" containerName="registry-server" probeResult="failure" output=< Jan 27 18:26:03 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 18:26:03 crc kubenswrapper[5049]: > Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.300061 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z8f98" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.300185 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z8f98" event={"ID":"9f86ad93-5ff6-419f-a10a-b88ce9d4706d","Type":"ContainerDied","Data":"01dfacb5337194b092e02bafdf1b953223a50728a52f06f49d6df9c9f1dc9b12"} Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.300348 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01dfacb5337194b092e02bafdf1b953223a50728a52f06f49d6df9c9f1dc9b12" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.526157 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b9db86f55-6jp76"] Jan 27 18:26:03 crc kubenswrapper[5049]: E0127 18:26:03.526832 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f86ad93-5ff6-419f-a10a-b88ce9d4706d" containerName="keystone-db-sync" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.526927 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f86ad93-5ff6-419f-a10a-b88ce9d4706d" containerName="keystone-db-sync" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.527229 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f86ad93-5ff6-419f-a10a-b88ce9d4706d" containerName="keystone-db-sync" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.528335 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.550149 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9db86f55-6jp76"] Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.585122 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.585400 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-dns-svc\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.585506 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-config\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.585591 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.585881 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgllq\" (UniqueName: \"kubernetes.io/projected/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-kube-api-access-kgllq\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.590641 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8h4kg"] Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.591612 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.597792 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.597967 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.598070 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.598250 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.600149 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tncdc" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.604758 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8h4kg"] Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.686916 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-combined-ca-bundle\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.687585 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2m8b\" (UniqueName: \"kubernetes.io/projected/904b0c70-2d4b-4682-abee-dc3437b217af-kube-api-access-s2m8b\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.687699 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-credential-keys\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.687783 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-scripts\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.687868 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.687962 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-dns-svc\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.688161 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-config\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.688737 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.688841 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-config-data\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.688813 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.688917 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-config\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.689100 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgllq\" (UniqueName: \"kubernetes.io/projected/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-kube-api-access-kgllq\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.689228 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-fernet-keys\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.689459 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-dns-svc\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.689457 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.707084 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgllq\" (UniqueName: 
\"kubernetes.io/projected/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-kube-api-access-kgllq\") pod \"dnsmasq-dns-6b9db86f55-6jp76\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.791164 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-combined-ca-bundle\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.791707 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2m8b\" (UniqueName: \"kubernetes.io/projected/904b0c70-2d4b-4682-abee-dc3437b217af-kube-api-access-s2m8b\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.792204 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-credential-keys\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.792320 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-scripts\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.792741 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-config-data\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.793096 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-fernet-keys\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.794869 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-combined-ca-bundle\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.795179 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-scripts\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.795942 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-fernet-keys\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " 
pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.796144 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-credential-keys\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.796519 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-config-data\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.811126 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2m8b\" (UniqueName: \"kubernetes.io/projected/904b0c70-2d4b-4682-abee-dc3437b217af-kube-api-access-s2m8b\") pod \"keystone-bootstrap-8h4kg\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.857085 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:03 crc kubenswrapper[5049]: I0127 18:26:03.911927 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:04 crc kubenswrapper[5049]: I0127 18:26:04.345537 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9db86f55-6jp76"] Jan 27 18:26:04 crc kubenswrapper[5049]: W0127 18:26:04.351413 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7fd3783_f1ee_44c0_b1ab_4023bf0c7d5a.slice/crio-4a5e4073459e33ba765dc06da6b9c62c30470cc03d3a9b314c8b1de89bca81d0 WatchSource:0}: Error finding container 4a5e4073459e33ba765dc06da6b9c62c30470cc03d3a9b314c8b1de89bca81d0: Status 404 returned error can't find the container with id 4a5e4073459e33ba765dc06da6b9c62c30470cc03d3a9b314c8b1de89bca81d0 Jan 27 18:26:04 crc kubenswrapper[5049]: I0127 18:26:04.479499 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8h4kg"] Jan 27 18:26:04 crc kubenswrapper[5049]: W0127 18:26:04.486927 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod904b0c70_2d4b_4682_abee_dc3437b217af.slice/crio-c4af6e6369d1cdb43a9065a2f4a955e7fe54ac0be43f24f864696fd83ad18d7a WatchSource:0}: Error finding container c4af6e6369d1cdb43a9065a2f4a955e7fe54ac0be43f24f864696fd83ad18d7a: Status 404 returned error can't find the container with id c4af6e6369d1cdb43a9065a2f4a955e7fe54ac0be43f24f864696fd83ad18d7a Jan 27 18:26:05 crc kubenswrapper[5049]: I0127 18:26:05.318857 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8h4kg" event={"ID":"904b0c70-2d4b-4682-abee-dc3437b217af","Type":"ContainerStarted","Data":"406935adc235719e0dbb011b271ee7dc773a95df6b6bd343f3fd77f363491951"} Jan 27 18:26:05 crc kubenswrapper[5049]: I0127 18:26:05.318911 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8h4kg" 
event={"ID":"904b0c70-2d4b-4682-abee-dc3437b217af","Type":"ContainerStarted","Data":"c4af6e6369d1cdb43a9065a2f4a955e7fe54ac0be43f24f864696fd83ad18d7a"} Jan 27 18:26:05 crc kubenswrapper[5049]: I0127 18:26:05.320311 5049 generic.go:334] "Generic (PLEG): container finished" podID="a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a" containerID="a676ff77ef2f6cf4af6ec85076d652b0ae08215c6769deedaffefd1cd057d538" exitCode=0 Jan 27 18:26:05 crc kubenswrapper[5049]: I0127 18:26:05.320346 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" event={"ID":"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a","Type":"ContainerDied","Data":"a676ff77ef2f6cf4af6ec85076d652b0ae08215c6769deedaffefd1cd057d538"} Jan 27 18:26:05 crc kubenswrapper[5049]: I0127 18:26:05.320364 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" event={"ID":"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a","Type":"ContainerStarted","Data":"4a5e4073459e33ba765dc06da6b9c62c30470cc03d3a9b314c8b1de89bca81d0"} Jan 27 18:26:05 crc kubenswrapper[5049]: I0127 18:26:05.347944 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8h4kg" podStartSLOduration=2.347925227 podStartE2EDuration="2.347925227s" podCreationTimestamp="2026-01-27 18:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:26:05.346547609 +0000 UTC m=+5340.445521198" watchObservedRunningTime="2026-01-27 18:26:05.347925227 +0000 UTC m=+5340.446898776" Jan 27 18:26:06 crc kubenswrapper[5049]: I0127 18:26:06.347230 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" event={"ID":"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a","Type":"ContainerStarted","Data":"a720b51cb41199d70274684da5e050f320a48ffe3ab4716df1a4c4cc98087d64"} Jan 27 18:26:06 crc kubenswrapper[5049]: I0127 18:26:06.347907 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:06 crc kubenswrapper[5049]: I0127 18:26:06.377830 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" podStartSLOduration=3.377814265 podStartE2EDuration="3.377814265s" podCreationTimestamp="2026-01-27 18:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:26:06.369983405 +0000 UTC m=+5341.468956964" watchObservedRunningTime="2026-01-27 18:26:06.377814265 +0000 UTC m=+5341.476787814" Jan 27 18:26:08 crc kubenswrapper[5049]: I0127 18:26:08.457197 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 18:26:09 crc kubenswrapper[5049]: I0127 18:26:09.368216 5049 generic.go:334] "Generic (PLEG): container finished" podID="904b0c70-2d4b-4682-abee-dc3437b217af" containerID="406935adc235719e0dbb011b271ee7dc773a95df6b6bd343f3fd77f363491951" exitCode=0 Jan 27 18:26:09 crc kubenswrapper[5049]: I0127 18:26:09.368254 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8h4kg" event={"ID":"904b0c70-2d4b-4682-abee-dc3437b217af","Type":"ContainerDied","Data":"406935adc235719e0dbb011b271ee7dc773a95df6b6bd343f3fd77f363491951"} Jan 27 18:26:10 crc kubenswrapper[5049]: I0127 18:26:10.828439 5049 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 18:26:10 crc kubenswrapper[5049]: I0127 18:26:10.828439 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:10 crc kubenswrapper[5049]: I0127 18:26:10.968963 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-scripts\") pod \"904b0c70-2d4b-4682-abee-dc3437b217af\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " Jan 27 18:26:10 crc kubenswrapper[5049]: I0127 18:26:10.969077 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2m8b\" (UniqueName: \"kubernetes.io/projected/904b0c70-2d4b-4682-abee-dc3437b217af-kube-api-access-s2m8b\") pod \"904b0c70-2d4b-4682-abee-dc3437b217af\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " Jan 27 18:26:10 crc kubenswrapper[5049]: I0127 18:26:10.969103 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-config-data\") pod \"904b0c70-2d4b-4682-abee-dc3437b217af\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " Jan 27 18:26:10 crc kubenswrapper[5049]: I0127 18:26:10.969124 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-credential-keys\") pod \"904b0c70-2d4b-4682-abee-dc3437b217af\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " Jan 27 18:26:10 crc kubenswrapper[5049]: I0127 18:26:10.969188 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-fernet-keys\") pod \"904b0c70-2d4b-4682-abee-dc3437b217af\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " Jan 27 18:26:10 crc kubenswrapper[5049]: I0127 18:26:10.969227 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-combined-ca-bundle\") pod \"904b0c70-2d4b-4682-abee-dc3437b217af\" (UID: \"904b0c70-2d4b-4682-abee-dc3437b217af\") " Jan 27 18:26:10 crc kubenswrapper[5049]: I0127 18:26:10.974479 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-scripts" (OuterVolumeSpecName: "scripts") pod "904b0c70-2d4b-4682-abee-dc3437b217af" (UID: "904b0c70-2d4b-4682-abee-dc3437b217af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:10 crc kubenswrapper[5049]: I0127 18:26:10.974476 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "904b0c70-2d4b-4682-abee-dc3437b217af" (UID: "904b0c70-2d4b-4682-abee-dc3437b217af"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:10 crc kubenswrapper[5049]: I0127 18:26:10.974641 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "904b0c70-2d4b-4682-abee-dc3437b217af" (UID: "904b0c70-2d4b-4682-abee-dc3437b217af"). InnerVolumeSpecName "credential-keys".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:10 crc kubenswrapper[5049]: I0127 18:26:10.975568 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/904b0c70-2d4b-4682-abee-dc3437b217af-kube-api-access-s2m8b" (OuterVolumeSpecName: "kube-api-access-s2m8b") pod "904b0c70-2d4b-4682-abee-dc3437b217af" (UID: "904b0c70-2d4b-4682-abee-dc3437b217af"). InnerVolumeSpecName "kube-api-access-s2m8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.009001 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "904b0c70-2d4b-4682-abee-dc3437b217af" (UID: "904b0c70-2d4b-4682-abee-dc3437b217af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.009433 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-config-data" (OuterVolumeSpecName: "config-data") pod "904b0c70-2d4b-4682-abee-dc3437b217af" (UID: "904b0c70-2d4b-4682-abee-dc3437b217af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.071132 5049 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.071174 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.071217 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.071230 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2m8b\" (UniqueName: \"kubernetes.io/projected/904b0c70-2d4b-4682-abee-dc3437b217af-kube-api-access-s2m8b\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.071241 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.071252 5049 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/904b0c70-2d4b-4682-abee-dc3437b217af-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.412689 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8h4kg" event={"ID":"904b0c70-2d4b-4682-abee-dc3437b217af","Type":"ContainerDied","Data":"c4af6e6369d1cdb43a9065a2f4a955e7fe54ac0be43f24f864696fd83ad18d7a"} Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.413012 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4af6e6369d1cdb43a9065a2f4a955e7fe54ac0be43f24f864696fd83ad18d7a" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 
Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.412778 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8h4kg" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.477177 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8h4kg"] Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.485244 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8h4kg"] Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.563229 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dq7vd"] Jan 27 18:26:11 crc kubenswrapper[5049]: E0127 18:26:11.563619 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904b0c70-2d4b-4682-abee-dc3437b217af" containerName="keystone-bootstrap" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.563650 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="904b0c70-2d4b-4682-abee-dc3437b217af" containerName="keystone-bootstrap" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.563878 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="904b0c70-2d4b-4682-abee-dc3437b217af" containerName="keystone-bootstrap" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.564594 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.569174 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.569411 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.569655 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.569822 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tncdc" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.570001 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.579897 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-credential-keys\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.579945 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-config-data\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.579977 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-scripts\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.580036 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-fernet-keys\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.580082 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbpsj\" (UniqueName: \"kubernetes.io/projected/0fca580b-5ec6-433f-872d-f2b2af2445f8-kube-api-access-tbpsj\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.580110 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-combined-ca-bundle\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.581264 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dq7vd"] Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.658422 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="904b0c70-2d4b-4682-abee-dc3437b217af" path="/var/lib/kubelet/pods/904b0c70-2d4b-4682-abee-dc3437b217af/volumes" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.682561 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-credential-keys\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.682600 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-config-data\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.682623 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-scripts\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.682726 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-fernet-keys\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.682750 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbpsj\" (UniqueName: \"kubernetes.io/projected/0fca580b-5ec6-433f-872d-f2b2af2445f8-kube-api-access-tbpsj\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.682780 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-combined-ca-bundle\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.687925 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-fernet-keys\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.688618 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-config-data\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.689048 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-combined-ca-bundle\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.689343 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-scripts\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.690550 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-credential-keys\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.699446 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbpsj\" (UniqueName: \"kubernetes.io/projected/0fca580b-5ec6-433f-872d-f2b2af2445f8-kube-api-access-tbpsj\") pod \"keystone-bootstrap-dq7vd\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:11 crc kubenswrapper[5049]: I0127 18:26:11.879968 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:12 crc kubenswrapper[5049]: I0127 18:26:12.121309 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lfrws" Jan 27 18:26:12 crc kubenswrapper[5049]: I0127 18:26:12.179429 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lfrws" Jan 27 18:26:12 crc kubenswrapper[5049]: I0127 18:26:12.355659 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dq7vd"] Jan 27 18:26:12 crc kubenswrapper[5049]: W0127 18:26:12.359861 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fca580b_5ec6_433f_872d_f2b2af2445f8.slice/crio-758fc0778616b92cdeef3187b9ff959a9d27b11c1d7c169962bef57369aeaa13 WatchSource:0}: Error finding container 758fc0778616b92cdeef3187b9ff959a9d27b11c1d7c169962bef57369aeaa13: Status 404 returned error can't find the container with id 758fc0778616b92cdeef3187b9ff959a9d27b11c1d7c169962bef57369aeaa13 Jan 27 18:26:12 crc kubenswrapper[5049]: I0127 18:26:12.374725 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lfrws"] Jan 27 18:26:12 crc kubenswrapper[5049]: I0127 18:26:12.423257 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dq7vd" event={"ID":"0fca580b-5ec6-433f-872d-f2b2af2445f8","Type":"ContainerStarted","Data":"758fc0778616b92cdeef3187b9ff959a9d27b11c1d7c169962bef57369aeaa13"} Jan 27 18:26:13 crc kubenswrapper[5049]: I0127 18:26:13.435118 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dq7vd" event={"ID":"0fca580b-5ec6-433f-872d-f2b2af2445f8","Type":"ContainerStarted","Data":"c785cd7c07377c39d96c952eb89eb570b13cdfad47e68265288a3e4873c6c2c8"} Jan 27 18:26:13 crc kubenswrapper[5049]: I0127 18:26:13.435399 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lfrws" podUID="9207912b-b3a3-4915-8191-3f2783ab5a8d" containerName="registry-server" containerID="cri-o://28ed34fc39f72749146f87d71b233d0a1b077240014a1fa952ae1d8fcb06d423" gracePeriod=2 Jan 27 18:26:13 crc kubenswrapper[5049]: I0127 18:26:13.472437 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dq7vd" podStartSLOduration=2.472422282 podStartE2EDuration="2.472422282s" podCreationTimestamp="2026-01-27 18:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:26:13.471231668 +0000 UTC m=+5348.570205217" watchObservedRunningTime="2026-01-27 18:26:13.472422282 +0000 UTC m=+5348.571395831" Jan 27 18:26:13 crc kubenswrapper[5049]: I0127 18:26:13.858823 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:26:13 crc kubenswrapper[5049]: I0127 18:26:13.915217 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f5b99fb57-926hv"] Jan 27 18:26:13 crc kubenswrapper[5049]: I0127 18:26:13.915833 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" podUID="23e8940e-bf1e-446b-9a25-d76674e9a6c9" containerName="dnsmasq-dns" containerID="cri-o://67bc370eba8434130a82f2a8871561fbdfac8b51089be32800abe53a188b7ed3" gracePeriod=10
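
Both kills above go through the same path: kuberuntime_container.go logs "Killing container with a grace period" with the pod, container name, CRI-O container ID, and the grace period in seconds, typically the pod's terminationGracePeriodSeconds (2 for the marketplace registry pod, 10 for the old dnsmasq pod). To list every graceful stop in a dump, a sketch under the same kubelet.log assumption:

    import re

    # Fields as they appear in the "Killing container with a grace period"
    # entries above; klog renders them as key="value" pairs on one line.
    KILL = re.compile(
        r'"Killing container with a grace period" pod="([^"]+)" podUID="([^"]+)"'
        r' containerName="([^"]+)" containerID="cri-o://([0-9a-f]+)" gracePeriod=(\d+)'
    )

    for line in open("kubelet.log", encoding="utf-8"):  # assumed file name
        for pod, uid, name, cid, grace in KILL.findall(line):
            print(f"{pod}/{name} ({cid[:12]}): stopping with {grace}s grace")
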
containerID="cri-o://67bc370eba8434130a82f2a8871561fbdfac8b51089be32800abe53a188b7ed3" gracePeriod=10 Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.013294 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfrws" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.021011 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9207912b-b3a3-4915-8191-3f2783ab5a8d-utilities\") pod \"9207912b-b3a3-4915-8191-3f2783ab5a8d\" (UID: \"9207912b-b3a3-4915-8191-3f2783ab5a8d\") " Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.021068 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd4pl\" (UniqueName: \"kubernetes.io/projected/9207912b-b3a3-4915-8191-3f2783ab5a8d-kube-api-access-qd4pl\") pod \"9207912b-b3a3-4915-8191-3f2783ab5a8d\" (UID: \"9207912b-b3a3-4915-8191-3f2783ab5a8d\") " Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.021133 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9207912b-b3a3-4915-8191-3f2783ab5a8d-catalog-content\") pod \"9207912b-b3a3-4915-8191-3f2783ab5a8d\" (UID: \"9207912b-b3a3-4915-8191-3f2783ab5a8d\") " Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.022668 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9207912b-b3a3-4915-8191-3f2783ab5a8d-utilities" (OuterVolumeSpecName: "utilities") pod "9207912b-b3a3-4915-8191-3f2783ab5a8d" (UID: "9207912b-b3a3-4915-8191-3f2783ab5a8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.027503 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9207912b-b3a3-4915-8191-3f2783ab5a8d-kube-api-access-qd4pl" (OuterVolumeSpecName: "kube-api-access-qd4pl") pod "9207912b-b3a3-4915-8191-3f2783ab5a8d" (UID: "9207912b-b3a3-4915-8191-3f2783ab5a8d"). InnerVolumeSpecName "kube-api-access-qd4pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.124751 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9207912b-b3a3-4915-8191-3f2783ab5a8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.124849 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd4pl\" (UniqueName: \"kubernetes.io/projected/9207912b-b3a3-4915-8191-3f2783ab5a8d-kube-api-access-qd4pl\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.174419 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9207912b-b3a3-4915-8191-3f2783ab5a8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9207912b-b3a3-4915-8191-3f2783ab5a8d" (UID: "9207912b-b3a3-4915-8191-3f2783ab5a8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.226424 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9207912b-b3a3-4915-8191-3f2783ab5a8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.423496 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.429390 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-ovsdbserver-sb\") pod \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.429451 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-config\") pod \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.429506 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-ovsdbserver-nb\") pod \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.429586 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc7n2\" (UniqueName: \"kubernetes.io/projected/23e8940e-bf1e-446b-9a25-d76674e9a6c9-kube-api-access-sc7n2\") pod \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.429705 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-dns-svc\") pod \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\" (UID: \"23e8940e-bf1e-446b-9a25-d76674e9a6c9\") " Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.435414 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e8940e-bf1e-446b-9a25-d76674e9a6c9-kube-api-access-sc7n2" (OuterVolumeSpecName: "kube-api-access-sc7n2") pod "23e8940e-bf1e-446b-9a25-d76674e9a6c9" (UID: "23e8940e-bf1e-446b-9a25-d76674e9a6c9"). InnerVolumeSpecName "kube-api-access-sc7n2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.448873 5049 generic.go:334] "Generic (PLEG): container finished" podID="9207912b-b3a3-4915-8191-3f2783ab5a8d" containerID="28ed34fc39f72749146f87d71b233d0a1b077240014a1fa952ae1d8fcb06d423" exitCode=0 Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.448929 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfrws" event={"ID":"9207912b-b3a3-4915-8191-3f2783ab5a8d","Type":"ContainerDied","Data":"28ed34fc39f72749146f87d71b233d0a1b077240014a1fa952ae1d8fcb06d423"} Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.448978 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfrws" event={"ID":"9207912b-b3a3-4915-8191-3f2783ab5a8d","Type":"ContainerDied","Data":"b8a2fb3cc70ae6c74cd4626ef5eca0bb25a3b952f10b97f2e23a059bee84abc6"} Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.448986 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfrws" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.448997 5049 scope.go:117] "RemoveContainer" containerID="28ed34fc39f72749146f87d71b233d0a1b077240014a1fa952ae1d8fcb06d423" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.455746 5049 generic.go:334] "Generic (PLEG): container finished" podID="23e8940e-bf1e-446b-9a25-d76674e9a6c9" containerID="67bc370eba8434130a82f2a8871561fbdfac8b51089be32800abe53a188b7ed3" exitCode=0 Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.456045 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.456374 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" event={"ID":"23e8940e-bf1e-446b-9a25-d76674e9a6c9","Type":"ContainerDied","Data":"67bc370eba8434130a82f2a8871561fbdfac8b51089be32800abe53a188b7ed3"} Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.456413 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f5b99fb57-926hv" event={"ID":"23e8940e-bf1e-446b-9a25-d76674e9a6c9","Type":"ContainerDied","Data":"cb6bf48afdce595e8d9d5e7f6689998610c3015d6b9d84fdf07d72155c97b052"} Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.482287 5049 scope.go:117] "RemoveContainer" containerID="93718dac57fc8897a375e725503b16649fe33dd181b37f220d24e0a25576e30f" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.506183 5049 scope.go:117] "RemoveContainer" containerID="c7b4193fa2738e040a0dec3efdb322be7039e94695da4b1dc77c9d838cf2ee22" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.514357 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-config" (OuterVolumeSpecName: "config") pod "23e8940e-bf1e-446b-9a25-d76674e9a6c9" (UID: "23e8940e-bf1e-446b-9a25-d76674e9a6c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.516372 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23e8940e-bf1e-446b-9a25-d76674e9a6c9" (UID: "23e8940e-bf1e-446b-9a25-d76674e9a6c9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.517733 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lfrws"] Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.520413 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23e8940e-bf1e-446b-9a25-d76674e9a6c9" (UID: "23e8940e-bf1e-446b-9a25-d76674e9a6c9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.523495 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23e8940e-bf1e-446b-9a25-d76674e9a6c9" (UID: "23e8940e-bf1e-446b-9a25-d76674e9a6c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.531722 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.531755 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.531767 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.531775 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23e8940e-bf1e-446b-9a25-d76674e9a6c9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.531785 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc7n2\" (UniqueName: \"kubernetes.io/projected/23e8940e-bf1e-446b-9a25-d76674e9a6c9-kube-api-access-sc7n2\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.536096 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lfrws"] Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.543244 5049 scope.go:117] "RemoveContainer" containerID="28ed34fc39f72749146f87d71b233d0a1b077240014a1fa952ae1d8fcb06d423" Jan 27 18:26:14 crc kubenswrapper[5049]: E0127 18:26:14.545101 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28ed34fc39f72749146f87d71b233d0a1b077240014a1fa952ae1d8fcb06d423\": container with ID starting with 28ed34fc39f72749146f87d71b233d0a1b077240014a1fa952ae1d8fcb06d423 not found: ID does not exist" containerID="28ed34fc39f72749146f87d71b233d0a1b077240014a1fa952ae1d8fcb06d423" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.545154 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ed34fc39f72749146f87d71b233d0a1b077240014a1fa952ae1d8fcb06d423"} err="failed to get container status 
\"28ed34fc39f72749146f87d71b233d0a1b077240014a1fa952ae1d8fcb06d423\": rpc error: code = NotFound desc = could not find container \"28ed34fc39f72749146f87d71b233d0a1b077240014a1fa952ae1d8fcb06d423\": container with ID starting with 28ed34fc39f72749146f87d71b233d0a1b077240014a1fa952ae1d8fcb06d423 not found: ID does not exist" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.545183 5049 scope.go:117] "RemoveContainer" containerID="93718dac57fc8897a375e725503b16649fe33dd181b37f220d24e0a25576e30f" Jan 27 18:26:14 crc kubenswrapper[5049]: E0127 18:26:14.548905 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93718dac57fc8897a375e725503b16649fe33dd181b37f220d24e0a25576e30f\": container with ID starting with 93718dac57fc8897a375e725503b16649fe33dd181b37f220d24e0a25576e30f not found: ID does not exist" containerID="93718dac57fc8897a375e725503b16649fe33dd181b37f220d24e0a25576e30f" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.548940 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93718dac57fc8897a375e725503b16649fe33dd181b37f220d24e0a25576e30f"} err="failed to get container status \"93718dac57fc8897a375e725503b16649fe33dd181b37f220d24e0a25576e30f\": rpc error: code = NotFound desc = could not find container \"93718dac57fc8897a375e725503b16649fe33dd181b37f220d24e0a25576e30f\": container with ID starting with 93718dac57fc8897a375e725503b16649fe33dd181b37f220d24e0a25576e30f not found: ID does not exist" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.548964 5049 scope.go:117] "RemoveContainer" containerID="c7b4193fa2738e040a0dec3efdb322be7039e94695da4b1dc77c9d838cf2ee22" Jan 27 18:26:14 crc kubenswrapper[5049]: E0127 18:26:14.552927 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7b4193fa2738e040a0dec3efdb322be7039e94695da4b1dc77c9d838cf2ee22\": container with ID starting with c7b4193fa2738e040a0dec3efdb322be7039e94695da4b1dc77c9d838cf2ee22 not found: ID does not exist" containerID="c7b4193fa2738e040a0dec3efdb322be7039e94695da4b1dc77c9d838cf2ee22" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.552964 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7b4193fa2738e040a0dec3efdb322be7039e94695da4b1dc77c9d838cf2ee22"} err="failed to get container status \"c7b4193fa2738e040a0dec3efdb322be7039e94695da4b1dc77c9d838cf2ee22\": rpc error: code = NotFound desc = could not find container \"c7b4193fa2738e040a0dec3efdb322be7039e94695da4b1dc77c9d838cf2ee22\": container with ID starting with c7b4193fa2738e040a0dec3efdb322be7039e94695da4b1dc77c9d838cf2ee22 not found: ID does not exist" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.552989 5049 scope.go:117] "RemoveContainer" containerID="67bc370eba8434130a82f2a8871561fbdfac8b51089be32800abe53a188b7ed3" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.583277 5049 scope.go:117] "RemoveContainer" containerID="20067cd66a3809551f43d1bb9e566dd71918cd0398104e11cc8e0f0163828391" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.607499 5049 scope.go:117] "RemoveContainer" containerID="67bc370eba8434130a82f2a8871561fbdfac8b51089be32800abe53a188b7ed3" Jan 27 18:26:14 crc kubenswrapper[5049]: E0127 18:26:14.608031 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"67bc370eba8434130a82f2a8871561fbdfac8b51089be32800abe53a188b7ed3\": container with ID starting with 67bc370eba8434130a82f2a8871561fbdfac8b51089be32800abe53a188b7ed3 not found: ID does not exist" containerID="67bc370eba8434130a82f2a8871561fbdfac8b51089be32800abe53a188b7ed3" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.608080 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67bc370eba8434130a82f2a8871561fbdfac8b51089be32800abe53a188b7ed3"} err="failed to get container status \"67bc370eba8434130a82f2a8871561fbdfac8b51089be32800abe53a188b7ed3\": rpc error: code = NotFound desc = could not find container \"67bc370eba8434130a82f2a8871561fbdfac8b51089be32800abe53a188b7ed3\": container with ID starting with 67bc370eba8434130a82f2a8871561fbdfac8b51089be32800abe53a188b7ed3 not found: ID does not exist" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.608106 5049 scope.go:117] "RemoveContainer" containerID="20067cd66a3809551f43d1bb9e566dd71918cd0398104e11cc8e0f0163828391" Jan 27 18:26:14 crc kubenswrapper[5049]: E0127 18:26:14.608343 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20067cd66a3809551f43d1bb9e566dd71918cd0398104e11cc8e0f0163828391\": container with ID starting with 20067cd66a3809551f43d1bb9e566dd71918cd0398104e11cc8e0f0163828391 not found: ID does not exist" containerID="20067cd66a3809551f43d1bb9e566dd71918cd0398104e11cc8e0f0163828391" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.608387 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20067cd66a3809551f43d1bb9e566dd71918cd0398104e11cc8e0f0163828391"} err="failed to get container status \"20067cd66a3809551f43d1bb9e566dd71918cd0398104e11cc8e0f0163828391\": rpc error: code = NotFound desc = could not find container \"20067cd66a3809551f43d1bb9e566dd71918cd0398104e11cc8e0f0163828391\": container with ID starting with 20067cd66a3809551f43d1bb9e566dd71918cd0398104e11cc8e0f0163828391 not found: ID does not exist" Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.793261 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f5b99fb57-926hv"] Jan 27 18:26:14 crc kubenswrapper[5049]: I0127 18:26:14.800869 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f5b99fb57-926hv"] Jan 27 18:26:15 crc kubenswrapper[5049]: I0127 18:26:15.468643 5049 generic.go:334] "Generic (PLEG): container finished" podID="0fca580b-5ec6-433f-872d-f2b2af2445f8" containerID="c785cd7c07377c39d96c952eb89eb570b13cdfad47e68265288a3e4873c6c2c8" exitCode=0 Jan 27 18:26:15 crc kubenswrapper[5049]: I0127 18:26:15.468703 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dq7vd" event={"ID":"0fca580b-5ec6-433f-872d-f2b2af2445f8","Type":"ContainerDied","Data":"c785cd7c07377c39d96c952eb89eb570b13cdfad47e68265288a3e4873c6c2c8"} Jan 27 18:26:15 crc kubenswrapper[5049]: I0127 18:26:15.656485 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23e8940e-bf1e-446b-9a25-d76674e9a6c9" path="/var/lib/kubelet/pods/23e8940e-bf1e-446b-9a25-d76674e9a6c9/volumes" Jan 27 18:26:15 crc kubenswrapper[5049]: I0127 18:26:15.657104 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9207912b-b3a3-4915-8191-3f2783ab5a8d" path="/var/lib/kubelet/pods/9207912b-b3a3-4915-8191-3f2783ab5a8d/volumes" Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 
Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 18:26:16.646418 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:26:16 crc kubenswrapper[5049]: E0127 18:26:16.646625 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 18:26:16.807565 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 18:26:16.966944 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-fernet-keys\") pod \"0fca580b-5ec6-433f-872d-f2b2af2445f8\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 18:26:16.967034 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-combined-ca-bundle\") pod \"0fca580b-5ec6-433f-872d-f2b2af2445f8\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 18:26:16.967123 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-config-data\") pod \"0fca580b-5ec6-433f-872d-f2b2af2445f8\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 18:26:16.967174 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-credential-keys\") pod \"0fca580b-5ec6-433f-872d-f2b2af2445f8\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 18:26:16.967206 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbpsj\" (UniqueName: \"kubernetes.io/projected/0fca580b-5ec6-433f-872d-f2b2af2445f8-kube-api-access-tbpsj\") pod \"0fca580b-5ec6-433f-872d-f2b2af2445f8\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 18:26:16.967269 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-scripts\") pod \"0fca580b-5ec6-433f-872d-f2b2af2445f8\" (UID: \"0fca580b-5ec6-433f-872d-f2b2af2445f8\") " Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 18:26:16.972662 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-scripts" (OuterVolumeSpecName: "scripts") pod "0fca580b-5ec6-433f-872d-f2b2af2445f8" (UID: "0fca580b-5ec6-433f-872d-f2b2af2445f8"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 18:26:16.973052 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0fca580b-5ec6-433f-872d-f2b2af2445f8" (UID: "0fca580b-5ec6-433f-872d-f2b2af2445f8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 18:26:16.973864 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0fca580b-5ec6-433f-872d-f2b2af2445f8" (UID: "0fca580b-5ec6-433f-872d-f2b2af2445f8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 18:26:16.974311 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fca580b-5ec6-433f-872d-f2b2af2445f8-kube-api-access-tbpsj" (OuterVolumeSpecName: "kube-api-access-tbpsj") pod "0fca580b-5ec6-433f-872d-f2b2af2445f8" (UID: "0fca580b-5ec6-433f-872d-f2b2af2445f8"). InnerVolumeSpecName "kube-api-access-tbpsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 18:26:16.993106 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fca580b-5ec6-433f-872d-f2b2af2445f8" (UID: "0fca580b-5ec6-433f-872d-f2b2af2445f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:16 crc kubenswrapper[5049]: I0127 18:26:16.994468 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-config-data" (OuterVolumeSpecName: "config-data") pod "0fca580b-5ec6-433f-872d-f2b2af2445f8" (UID: "0fca580b-5ec6-433f-872d-f2b2af2445f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.068582 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.068621 5049 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.068641 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbpsj\" (UniqueName: \"kubernetes.io/projected/0fca580b-5ec6-433f-872d-f2b2af2445f8-kube-api-access-tbpsj\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.068653 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.068685 5049 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.068698 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fca580b-5ec6-433f-872d-f2b2af2445f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.490988 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dq7vd" event={"ID":"0fca580b-5ec6-433f-872d-f2b2af2445f8","Type":"ContainerDied","Data":"758fc0778616b92cdeef3187b9ff959a9d27b11c1d7c169962bef57369aeaa13"} Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.491511 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="758fc0778616b92cdeef3187b9ff959a9d27b11c1d7c169962bef57369aeaa13" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.491090 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dq7vd" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.600965 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7f8ddf49c9-t4b7l"] Jan 27 18:26:17 crc kubenswrapper[5049]: E0127 18:26:17.601700 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fca580b-5ec6-433f-872d-f2b2af2445f8" containerName="keystone-bootstrap" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.601835 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fca580b-5ec6-433f-872d-f2b2af2445f8" containerName="keystone-bootstrap" Jan 27 18:26:17 crc kubenswrapper[5049]: E0127 18:26:17.603047 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e8940e-bf1e-446b-9a25-d76674e9a6c9" containerName="init" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.603167 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e8940e-bf1e-446b-9a25-d76674e9a6c9" containerName="init" Jan 27 18:26:17 crc kubenswrapper[5049]: E0127 18:26:17.603265 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9207912b-b3a3-4915-8191-3f2783ab5a8d" containerName="registry-server" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.603383 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9207912b-b3a3-4915-8191-3f2783ab5a8d" containerName="registry-server" Jan 27 18:26:17 crc kubenswrapper[5049]: E0127 18:26:17.603501 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9207912b-b3a3-4915-8191-3f2783ab5a8d" containerName="extract-content" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.603594 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9207912b-b3a3-4915-8191-3f2783ab5a8d" containerName="extract-content" Jan 27 18:26:17 crc kubenswrapper[5049]: E0127 18:26:17.603715 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23e8940e-bf1e-446b-9a25-d76674e9a6c9" containerName="dnsmasq-dns" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.603807 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e8940e-bf1e-446b-9a25-d76674e9a6c9" containerName="dnsmasq-dns" Jan 27 18:26:17 crc kubenswrapper[5049]: E0127 18:26:17.603908 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9207912b-b3a3-4915-8191-3f2783ab5a8d" containerName="extract-utilities" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.603995 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9207912b-b3a3-4915-8191-3f2783ab5a8d" containerName="extract-utilities" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.604429 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="9207912b-b3a3-4915-8191-3f2783ab5a8d" containerName="registry-server" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.604574 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e8940e-bf1e-446b-9a25-d76674e9a6c9" containerName="dnsmasq-dns" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.604692 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fca580b-5ec6-433f-872d-f2b2af2445f8" containerName="keystone-bootstrap"
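
One reading note for blocks like the one above: every record carries a klog header, severity I/W/E plus MMDD HH:MM:SS.micro, the kubelet PID (5049 throughout), and the source file:line. The RemoveStaleState entries from cpu_manager.go:410 are logged at E level yet are routine cleanup of resource-manager state for containers that no longer exist, so severity alone is a poor filter. A histogram of E-level records by source file, under the same kubelet.log assumption, makes the real failures (for example pod_workers.go:1301's CrashLoopBackOff earlier) stand out:

    import re
    from collections import Counter

    # klog header: severity, MMDD, time, PID, then file:line, e.g.
    #   E0127 18:26:17.601700 5049 cpu_manager.go:410]
    KLOG = re.compile(r'\b([IWE])\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ ([\w.]+:\d+)\]')

    errors = Counter()
    for line in open("kubelet.log", encoding="utf-8"):  # assumed file name
        for sev, src in KLOG.findall(line):
            if sev == "E":
                errors[src] += 1

    for src, n in errors.most_common(10):
        print(f"{n:5d} E-level records from {src}")
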
Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.605659 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.607571 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tncdc" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.609488 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.610025 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.611452 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.622357 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f8ddf49c9-t4b7l"] Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.779334 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-fernet-keys\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.779384 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-scripts\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.779474 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-combined-ca-bundle\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.780000 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-config-data\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.780035 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrx4r\" (UniqueName: \"kubernetes.io/projected/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-kube-api-access-hrx4r\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.780144 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-credential-keys\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.881506 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-fernet-keys\") pod
\"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.881590 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-scripts\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.881755 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-combined-ca-bundle\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.881985 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-config-data\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.882545 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrx4r\" (UniqueName: \"kubernetes.io/projected/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-kube-api-access-hrx4r\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.882714 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-credential-keys\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.886294 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-credential-keys\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.894082 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-scripts\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.895512 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-config-data\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.896850 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-combined-ca-bundle\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 
18:26:17.897766 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-fernet-keys\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.899982 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrx4r\" (UniqueName: \"kubernetes.io/projected/a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f-kube-api-access-hrx4r\") pod \"keystone-7f8ddf49c9-t4b7l\" (UID: \"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f\") " pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:17 crc kubenswrapper[5049]: I0127 18:26:17.930497 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:18 crc kubenswrapper[5049]: I0127 18:26:18.401336 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f8ddf49c9-t4b7l"] Jan 27 18:26:18 crc kubenswrapper[5049]: I0127 18:26:18.497867 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f8ddf49c9-t4b7l" event={"ID":"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f","Type":"ContainerStarted","Data":"73e5560e9139b5617df93ce60f024643d5515dd1da51e4d81183007f509b10c5"} Jan 27 18:26:19 crc kubenswrapper[5049]: I0127 18:26:19.508771 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f8ddf49c9-t4b7l" event={"ID":"a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f","Type":"ContainerStarted","Data":"0c1ac30a3007592aa40797b20be3f585a1309cc93f21d4c1c2716b09239d19e5"} Jan 27 18:26:19 crc kubenswrapper[5049]: I0127 18:26:19.509802 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:19 crc kubenswrapper[5049]: I0127 18:26:19.536791 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7f8ddf49c9-t4b7l" podStartSLOduration=2.536767259 podStartE2EDuration="2.536767259s" podCreationTimestamp="2026-01-27 18:26:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:26:19.53037996 +0000 UTC m=+5354.629353519" watchObservedRunningTime="2026-01-27 18:26:19.536767259 +0000 UTC m=+5354.635740818" Jan 27 18:26:31 crc kubenswrapper[5049]: I0127 18:26:31.646563 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:26:31 crc kubenswrapper[5049]: E0127 18:26:31.647459 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:26:42 crc kubenswrapper[5049]: I0127 18:26:42.645835 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:26:42 crc kubenswrapper[5049]: E0127 18:26:42.646626 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:26:49 crc kubenswrapper[5049]: I0127 18:26:49.457202 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7f8ddf49c9-t4b7l" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.126143 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.127510 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.129211 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.130858 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.131882 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-shbf4" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.182169 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.216917 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/260481f1-9af6-4824-bf84-18e080e5e1a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"260481f1-9af6-4824-bf84-18e080e5e1a6\") " pod="openstack/openstackclient" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.217026 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/260481f1-9af6-4824-bf84-18e080e5e1a6-openstack-config\") pod \"openstackclient\" (UID: \"260481f1-9af6-4824-bf84-18e080e5e1a6\") " pod="openstack/openstackclient" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.217070 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4bfg\" (UniqueName: \"kubernetes.io/projected/260481f1-9af6-4824-bf84-18e080e5e1a6-kube-api-access-d4bfg\") pod \"openstackclient\" (UID: \"260481f1-9af6-4824-bf84-18e080e5e1a6\") " pod="openstack/openstackclient" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.318992 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/260481f1-9af6-4824-bf84-18e080e5e1a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"260481f1-9af6-4824-bf84-18e080e5e1a6\") " pod="openstack/openstackclient" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.319063 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/260481f1-9af6-4824-bf84-18e080e5e1a6-openstack-config\") pod \"openstackclient\" (UID: \"260481f1-9af6-4824-bf84-18e080e5e1a6\") " pod="openstack/openstackclient" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.319089 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4bfg\" (UniqueName: 
\"kubernetes.io/projected/260481f1-9af6-4824-bf84-18e080e5e1a6-kube-api-access-d4bfg\") pod \"openstackclient\" (UID: \"260481f1-9af6-4824-bf84-18e080e5e1a6\") " pod="openstack/openstackclient" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.320698 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/260481f1-9af6-4824-bf84-18e080e5e1a6-openstack-config\") pod \"openstackclient\" (UID: \"260481f1-9af6-4824-bf84-18e080e5e1a6\") " pod="openstack/openstackclient" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.333227 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/260481f1-9af6-4824-bf84-18e080e5e1a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"260481f1-9af6-4824-bf84-18e080e5e1a6\") " pod="openstack/openstackclient" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.409162 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4bfg\" (UniqueName: \"kubernetes.io/projected/260481f1-9af6-4824-bf84-18e080e5e1a6-kube-api-access-d4bfg\") pod \"openstackclient\" (UID: \"260481f1-9af6-4824-bf84-18e080e5e1a6\") " pod="openstack/openstackclient" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.465128 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.646547 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:26:53 crc kubenswrapper[5049]: E0127 18:26:53.647290 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:26:53 crc kubenswrapper[5049]: I0127 18:26:53.907916 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 18:26:54 crc kubenswrapper[5049]: I0127 18:26:54.788551 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"260481f1-9af6-4824-bf84-18e080e5e1a6","Type":"ContainerStarted","Data":"3a349dba103f664ffc2b48e1119b2c8d63e695e19b998b786ce9bcb77e75eb17"} Jan 27 18:26:54 crc kubenswrapper[5049]: I0127 18:26:54.788894 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"260481f1-9af6-4824-bf84-18e080e5e1a6","Type":"ContainerStarted","Data":"d4fd3425087ca09db05c96b54da2fd07388c659710fdf1a0225a2ed8071f2580"} Jan 27 18:27:07 crc kubenswrapper[5049]: I0127 18:27:07.645433 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:27:07 crc kubenswrapper[5049]: E0127 18:27:07.645980 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" 
podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:27:09 crc kubenswrapper[5049]: I0127 18:27:09.533508 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=16.533475271 podStartE2EDuration="16.533475271s" podCreationTimestamp="2026-01-27 18:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:26:54.806652792 +0000 UTC m=+5389.905626341" watchObservedRunningTime="2026-01-27 18:27:09.533475271 +0000 UTC m=+5404.632448820" Jan 27 18:27:09 crc kubenswrapper[5049]: I0127 18:27:09.544376 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-62vm5"] Jan 27 18:27:09 crc kubenswrapper[5049]: I0127 18:27:09.547400 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:09 crc kubenswrapper[5049]: I0127 18:27:09.555382 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62vm5"] Jan 27 18:27:09 crc kubenswrapper[5049]: I0127 18:27:09.692582 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsmkj\" (UniqueName: \"kubernetes.io/projected/9423fc55-0e65-4e83-b311-d63adcaeb162-kube-api-access-lsmkj\") pod \"redhat-marketplace-62vm5\" (UID: \"9423fc55-0e65-4e83-b311-d63adcaeb162\") " pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:09 crc kubenswrapper[5049]: I0127 18:27:09.692648 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9423fc55-0e65-4e83-b311-d63adcaeb162-utilities\") pod \"redhat-marketplace-62vm5\" (UID: \"9423fc55-0e65-4e83-b311-d63adcaeb162\") " pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:09 crc kubenswrapper[5049]: I0127 18:27:09.692753 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9423fc55-0e65-4e83-b311-d63adcaeb162-catalog-content\") pod \"redhat-marketplace-62vm5\" (UID: \"9423fc55-0e65-4e83-b311-d63adcaeb162\") " pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:09 crc kubenswrapper[5049]: I0127 18:27:09.794932 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsmkj\" (UniqueName: \"kubernetes.io/projected/9423fc55-0e65-4e83-b311-d63adcaeb162-kube-api-access-lsmkj\") pod \"redhat-marketplace-62vm5\" (UID: \"9423fc55-0e65-4e83-b311-d63adcaeb162\") " pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:09 crc kubenswrapper[5049]: I0127 18:27:09.795006 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9423fc55-0e65-4e83-b311-d63adcaeb162-utilities\") pod \"redhat-marketplace-62vm5\" (UID: \"9423fc55-0e65-4e83-b311-d63adcaeb162\") " pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:09 crc kubenswrapper[5049]: I0127 18:27:09.795075 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9423fc55-0e65-4e83-b311-d63adcaeb162-catalog-content\") pod \"redhat-marketplace-62vm5\" (UID: \"9423fc55-0e65-4e83-b311-d63adcaeb162\") " 
pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:09 crc kubenswrapper[5049]: I0127 18:27:09.795537 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9423fc55-0e65-4e83-b311-d63adcaeb162-catalog-content\") pod \"redhat-marketplace-62vm5\" (UID: \"9423fc55-0e65-4e83-b311-d63adcaeb162\") " pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:09 crc kubenswrapper[5049]: I0127 18:27:09.795837 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9423fc55-0e65-4e83-b311-d63adcaeb162-utilities\") pod \"redhat-marketplace-62vm5\" (UID: \"9423fc55-0e65-4e83-b311-d63adcaeb162\") " pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:09 crc kubenswrapper[5049]: I0127 18:27:09.819089 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsmkj\" (UniqueName: \"kubernetes.io/projected/9423fc55-0e65-4e83-b311-d63adcaeb162-kube-api-access-lsmkj\") pod \"redhat-marketplace-62vm5\" (UID: \"9423fc55-0e65-4e83-b311-d63adcaeb162\") " pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:09 crc kubenswrapper[5049]: I0127 18:27:09.880368 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:10 crc kubenswrapper[5049]: I0127 18:27:10.360400 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62vm5"] Jan 27 18:27:10 crc kubenswrapper[5049]: W0127 18:27:10.366642 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9423fc55_0e65_4e83_b311_d63adcaeb162.slice/crio-039d9cdd742af16431acf5f61ef6fa1cb096fb3bbc0a56835081658b9a751d6d WatchSource:0}: Error finding container 039d9cdd742af16431acf5f61ef6fa1cb096fb3bbc0a56835081658b9a751d6d: Status 404 returned error can't find the container with id 039d9cdd742af16431acf5f61ef6fa1cb096fb3bbc0a56835081658b9a751d6d Jan 27 18:27:10 crc kubenswrapper[5049]: I0127 18:27:10.932027 5049 generic.go:334] "Generic (PLEG): container finished" podID="9423fc55-0e65-4e83-b311-d63adcaeb162" containerID="0e6912af7281284a098b686466ece3088ef7c1d41dde23cbf7ad460fe6a58f50" exitCode=0 Jan 27 18:27:10 crc kubenswrapper[5049]: I0127 18:27:10.932231 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62vm5" event={"ID":"9423fc55-0e65-4e83-b311-d63adcaeb162","Type":"ContainerDied","Data":"0e6912af7281284a098b686466ece3088ef7c1d41dde23cbf7ad460fe6a58f50"} Jan 27 18:27:10 crc kubenswrapper[5049]: I0127 18:27:10.932375 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62vm5" event={"ID":"9423fc55-0e65-4e83-b311-d63adcaeb162","Type":"ContainerStarted","Data":"039d9cdd742af16431acf5f61ef6fa1cb096fb3bbc0a56835081658b9a751d6d"} Jan 27 18:27:12 crc kubenswrapper[5049]: I0127 18:27:12.949587 5049 generic.go:334] "Generic (PLEG): container finished" podID="9423fc55-0e65-4e83-b311-d63adcaeb162" containerID="4c1d490f03b0970fcddbf5526c8b16aee549972c5821d0d42e992f0b854421b2" exitCode=0 Jan 27 18:27:12 crc kubenswrapper[5049]: I0127 18:27:12.949709 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62vm5" 
event={"ID":"9423fc55-0e65-4e83-b311-d63adcaeb162","Type":"ContainerDied","Data":"4c1d490f03b0970fcddbf5526c8b16aee549972c5821d0d42e992f0b854421b2"} Jan 27 18:27:13 crc kubenswrapper[5049]: I0127 18:27:13.963303 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62vm5" event={"ID":"9423fc55-0e65-4e83-b311-d63adcaeb162","Type":"ContainerStarted","Data":"68af0ac7483f8a3dbd1566737ec379eea202d41c0d01407648b0efb08eae0631"} Jan 27 18:27:13 crc kubenswrapper[5049]: I0127 18:27:13.993646 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-62vm5" podStartSLOduration=2.578553164 podStartE2EDuration="4.993621992s" podCreationTimestamp="2026-01-27 18:27:09 +0000 UTC" firstStartedPulling="2026-01-27 18:27:10.934599142 +0000 UTC m=+5406.033572731" lastFinishedPulling="2026-01-27 18:27:13.349668 +0000 UTC m=+5408.448641559" observedRunningTime="2026-01-27 18:27:13.983953749 +0000 UTC m=+5409.082927318" watchObservedRunningTime="2026-01-27 18:27:13.993621992 +0000 UTC m=+5409.092595551" Jan 27 18:27:19 crc kubenswrapper[5049]: I0127 18:27:19.881229 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:19 crc kubenswrapper[5049]: I0127 18:27:19.881741 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:19 crc kubenswrapper[5049]: I0127 18:27:19.932694 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:20 crc kubenswrapper[5049]: I0127 18:27:20.060157 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:20 crc kubenswrapper[5049]: I0127 18:27:20.171262 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-62vm5"] Jan 27 18:27:22 crc kubenswrapper[5049]: I0127 18:27:22.022040 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-62vm5" podUID="9423fc55-0e65-4e83-b311-d63adcaeb162" containerName="registry-server" containerID="cri-o://68af0ac7483f8a3dbd1566737ec379eea202d41c0d01407648b0efb08eae0631" gracePeriod=2 Jan 27 18:27:22 crc kubenswrapper[5049]: I0127 18:27:22.444339 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:22 crc kubenswrapper[5049]: I0127 18:27:22.523532 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9423fc55-0e65-4e83-b311-d63adcaeb162-utilities\") pod \"9423fc55-0e65-4e83-b311-d63adcaeb162\" (UID: \"9423fc55-0e65-4e83-b311-d63adcaeb162\") " Jan 27 18:27:22 crc kubenswrapper[5049]: I0127 18:27:22.523593 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9423fc55-0e65-4e83-b311-d63adcaeb162-catalog-content\") pod \"9423fc55-0e65-4e83-b311-d63adcaeb162\" (UID: \"9423fc55-0e65-4e83-b311-d63adcaeb162\") " Jan 27 18:27:22 crc kubenswrapper[5049]: I0127 18:27:22.523619 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsmkj\" (UniqueName: \"kubernetes.io/projected/9423fc55-0e65-4e83-b311-d63adcaeb162-kube-api-access-lsmkj\") pod \"9423fc55-0e65-4e83-b311-d63adcaeb162\" (UID: \"9423fc55-0e65-4e83-b311-d63adcaeb162\") " Jan 27 18:27:22 crc kubenswrapper[5049]: I0127 18:27:22.524655 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9423fc55-0e65-4e83-b311-d63adcaeb162-utilities" (OuterVolumeSpecName: "utilities") pod "9423fc55-0e65-4e83-b311-d63adcaeb162" (UID: "9423fc55-0e65-4e83-b311-d63adcaeb162"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:27:22 crc kubenswrapper[5049]: I0127 18:27:22.528741 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9423fc55-0e65-4e83-b311-d63adcaeb162-kube-api-access-lsmkj" (OuterVolumeSpecName: "kube-api-access-lsmkj") pod "9423fc55-0e65-4e83-b311-d63adcaeb162" (UID: "9423fc55-0e65-4e83-b311-d63adcaeb162"). InnerVolumeSpecName "kube-api-access-lsmkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:27:22 crc kubenswrapper[5049]: I0127 18:27:22.548278 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9423fc55-0e65-4e83-b311-d63adcaeb162-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9423fc55-0e65-4e83-b311-d63adcaeb162" (UID: "9423fc55-0e65-4e83-b311-d63adcaeb162"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:27:22 crc kubenswrapper[5049]: I0127 18:27:22.626335 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9423fc55-0e65-4e83-b311-d63adcaeb162-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:22 crc kubenswrapper[5049]: I0127 18:27:22.626391 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9423fc55-0e65-4e83-b311-d63adcaeb162-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:22 crc kubenswrapper[5049]: I0127 18:27:22.626413 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsmkj\" (UniqueName: \"kubernetes.io/projected/9423fc55-0e65-4e83-b311-d63adcaeb162-kube-api-access-lsmkj\") on node \"crc\" DevicePath \"\"" Jan 27 18:27:22 crc kubenswrapper[5049]: I0127 18:27:22.646854 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:27:22 crc kubenswrapper[5049]: E0127 18:27:22.647252 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.038213 5049 generic.go:334] "Generic (PLEG): container finished" podID="9423fc55-0e65-4e83-b311-d63adcaeb162" containerID="68af0ac7483f8a3dbd1566737ec379eea202d41c0d01407648b0efb08eae0631" exitCode=0 Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.038267 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62vm5" event={"ID":"9423fc55-0e65-4e83-b311-d63adcaeb162","Type":"ContainerDied","Data":"68af0ac7483f8a3dbd1566737ec379eea202d41c0d01407648b0efb08eae0631"} Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.038606 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62vm5" event={"ID":"9423fc55-0e65-4e83-b311-d63adcaeb162","Type":"ContainerDied","Data":"039d9cdd742af16431acf5f61ef6fa1cb096fb3bbc0a56835081658b9a751d6d"} Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.038633 5049 scope.go:117] "RemoveContainer" containerID="68af0ac7483f8a3dbd1566737ec379eea202d41c0d01407648b0efb08eae0631" Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.038308 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62vm5" Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.065396 5049 scope.go:117] "RemoveContainer" containerID="4c1d490f03b0970fcddbf5526c8b16aee549972c5821d0d42e992f0b854421b2" Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.112757 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-62vm5"] Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.116948 5049 scope.go:117] "RemoveContainer" containerID="0e6912af7281284a098b686466ece3088ef7c1d41dde23cbf7ad460fe6a58f50" Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.130183 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-62vm5"] Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.161759 5049 scope.go:117] "RemoveContainer" containerID="68af0ac7483f8a3dbd1566737ec379eea202d41c0d01407648b0efb08eae0631" Jan 27 18:27:23 crc kubenswrapper[5049]: E0127 18:27:23.162255 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68af0ac7483f8a3dbd1566737ec379eea202d41c0d01407648b0efb08eae0631\": container with ID starting with 68af0ac7483f8a3dbd1566737ec379eea202d41c0d01407648b0efb08eae0631 not found: ID does not exist" containerID="68af0ac7483f8a3dbd1566737ec379eea202d41c0d01407648b0efb08eae0631" Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.162377 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68af0ac7483f8a3dbd1566737ec379eea202d41c0d01407648b0efb08eae0631"} err="failed to get container status \"68af0ac7483f8a3dbd1566737ec379eea202d41c0d01407648b0efb08eae0631\": rpc error: code = NotFound desc = could not find container \"68af0ac7483f8a3dbd1566737ec379eea202d41c0d01407648b0efb08eae0631\": container with ID starting with 68af0ac7483f8a3dbd1566737ec379eea202d41c0d01407648b0efb08eae0631 not found: ID does not exist" Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.162485 5049 scope.go:117] "RemoveContainer" containerID="4c1d490f03b0970fcddbf5526c8b16aee549972c5821d0d42e992f0b854421b2" Jan 27 18:27:23 crc kubenswrapper[5049]: E0127 18:27:23.164017 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c1d490f03b0970fcddbf5526c8b16aee549972c5821d0d42e992f0b854421b2\": container with ID starting with 4c1d490f03b0970fcddbf5526c8b16aee549972c5821d0d42e992f0b854421b2 not found: ID does not exist" containerID="4c1d490f03b0970fcddbf5526c8b16aee549972c5821d0d42e992f0b854421b2" Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.164163 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c1d490f03b0970fcddbf5526c8b16aee549972c5821d0d42e992f0b854421b2"} err="failed to get container status \"4c1d490f03b0970fcddbf5526c8b16aee549972c5821d0d42e992f0b854421b2\": rpc error: code = NotFound desc = could not find container \"4c1d490f03b0970fcddbf5526c8b16aee549972c5821d0d42e992f0b854421b2\": container with ID starting with 4c1d490f03b0970fcddbf5526c8b16aee549972c5821d0d42e992f0b854421b2 not found: ID does not exist" Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.164263 5049 scope.go:117] "RemoveContainer" containerID="0e6912af7281284a098b686466ece3088ef7c1d41dde23cbf7ad460fe6a58f50" Jan 27 18:27:23 crc kubenswrapper[5049]: E0127 18:27:23.165257 5049 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0e6912af7281284a098b686466ece3088ef7c1d41dde23cbf7ad460fe6a58f50\": container with ID starting with 0e6912af7281284a098b686466ece3088ef7c1d41dde23cbf7ad460fe6a58f50 not found: ID does not exist" containerID="0e6912af7281284a098b686466ece3088ef7c1d41dde23cbf7ad460fe6a58f50" Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.165366 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e6912af7281284a098b686466ece3088ef7c1d41dde23cbf7ad460fe6a58f50"} err="failed to get container status \"0e6912af7281284a098b686466ece3088ef7c1d41dde23cbf7ad460fe6a58f50\": rpc error: code = NotFound desc = could not find container \"0e6912af7281284a098b686466ece3088ef7c1d41dde23cbf7ad460fe6a58f50\": container with ID starting with 0e6912af7281284a098b686466ece3088ef7c1d41dde23cbf7ad460fe6a58f50 not found: ID does not exist" Jan 27 18:27:23 crc kubenswrapper[5049]: I0127 18:27:23.654946 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9423fc55-0e65-4e83-b311-d63adcaeb162" path="/var/lib/kubelet/pods/9423fc55-0e65-4e83-b311-d63adcaeb162/volumes" Jan 27 18:27:36 crc kubenswrapper[5049]: I0127 18:27:36.646991 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:27:36 crc kubenswrapper[5049]: E0127 18:27:36.647739 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:27:47 crc kubenswrapper[5049]: I0127 18:27:47.646571 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:27:47 crc kubenswrapper[5049]: E0127 18:27:47.647933 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:27:58 crc kubenswrapper[5049]: I0127 18:27:58.647185 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:27:58 crc kubenswrapper[5049]: E0127 18:27:58.647905 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.633020 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tg748"] Jan 27 18:28:06 crc kubenswrapper[5049]: E0127 18:28:06.634001 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9423fc55-0e65-4e83-b311-d63adcaeb162" 
containerName="registry-server" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.634019 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9423fc55-0e65-4e83-b311-d63adcaeb162" containerName="registry-server" Jan 27 18:28:06 crc kubenswrapper[5049]: E0127 18:28:06.634034 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9423fc55-0e65-4e83-b311-d63adcaeb162" containerName="extract-content" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.634042 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9423fc55-0e65-4e83-b311-d63adcaeb162" containerName="extract-content" Jan 27 18:28:06 crc kubenswrapper[5049]: E0127 18:28:06.634068 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9423fc55-0e65-4e83-b311-d63adcaeb162" containerName="extract-utilities" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.634077 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9423fc55-0e65-4e83-b311-d63adcaeb162" containerName="extract-utilities" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.634279 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="9423fc55-0e65-4e83-b311-d63adcaeb162" containerName="registry-server" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.635435 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.656519 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tg748"] Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.752316 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txk4l\" (UniqueName: \"kubernetes.io/projected/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-kube-api-access-txk4l\") pod \"community-operators-tg748\" (UID: \"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\") " pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.752400 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-catalog-content\") pod \"community-operators-tg748\" (UID: \"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\") " pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.752524 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-utilities\") pod \"community-operators-tg748\" (UID: \"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\") " pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.854201 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txk4l\" (UniqueName: \"kubernetes.io/projected/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-kube-api-access-txk4l\") pod \"community-operators-tg748\" (UID: \"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\") " pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.854273 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-catalog-content\") pod \"community-operators-tg748\" (UID: 
\"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\") " pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.854333 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-utilities\") pod \"community-operators-tg748\" (UID: \"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\") " pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.854902 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-utilities\") pod \"community-operators-tg748\" (UID: \"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\") " pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.855378 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-catalog-content\") pod \"community-operators-tg748\" (UID: \"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\") " pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.882001 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txk4l\" (UniqueName: \"kubernetes.io/projected/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-kube-api-access-txk4l\") pod \"community-operators-tg748\" (UID: \"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\") " pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:06 crc kubenswrapper[5049]: I0127 18:28:06.971521 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:07 crc kubenswrapper[5049]: I0127 18:28:07.535015 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tg748"] Jan 27 18:28:07 crc kubenswrapper[5049]: W0127 18:28:07.541815 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3a189a0_a6ee_4c4f_9c3a_fe3b56d3b291.slice/crio-8799d9ab126e3ede78836746b7a5b6cdd06fd39454f39a87b260fabadd624bb1 WatchSource:0}: Error finding container 8799d9ab126e3ede78836746b7a5b6cdd06fd39454f39a87b260fabadd624bb1: Status 404 returned error can't find the container with id 8799d9ab126e3ede78836746b7a5b6cdd06fd39454f39a87b260fabadd624bb1 Jan 27 18:28:07 crc kubenswrapper[5049]: I0127 18:28:07.760471 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tg748" event={"ID":"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291","Type":"ContainerStarted","Data":"b9a18796ba998b231e23d890e581bf2c9320235336d19a2d1bdc46e1bbeeb9dd"} Jan 27 18:28:07 crc kubenswrapper[5049]: I0127 18:28:07.760882 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tg748" event={"ID":"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291","Type":"ContainerStarted","Data":"8799d9ab126e3ede78836746b7a5b6cdd06fd39454f39a87b260fabadd624bb1"} Jan 27 18:28:08 crc kubenswrapper[5049]: I0127 18:28:08.770832 5049 generic.go:334] "Generic (PLEG): container finished" podID="e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" containerID="b9a18796ba998b231e23d890e581bf2c9320235336d19a2d1bdc46e1bbeeb9dd" exitCode=0 Jan 27 18:28:08 crc kubenswrapper[5049]: I0127 18:28:08.770947 5049 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-tg748" event={"ID":"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291","Type":"ContainerDied","Data":"b9a18796ba998b231e23d890e581bf2c9320235336d19a2d1bdc46e1bbeeb9dd"} Jan 27 18:28:09 crc kubenswrapper[5049]: I0127 18:28:09.647581 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:28:09 crc kubenswrapper[5049]: E0127 18:28:09.647925 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:28:10 crc kubenswrapper[5049]: I0127 18:28:10.788571 5049 generic.go:334] "Generic (PLEG): container finished" podID="e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" containerID="cf7bef42b49a5f334eebcc9f90abdbe4c45b1ee36dc5f337f680eb0ffe827a13" exitCode=0 Jan 27 18:28:10 crc kubenswrapper[5049]: I0127 18:28:10.788900 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tg748" event={"ID":"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291","Type":"ContainerDied","Data":"cf7bef42b49a5f334eebcc9f90abdbe4c45b1ee36dc5f337f680eb0ffe827a13"} Jan 27 18:28:11 crc kubenswrapper[5049]: I0127 18:28:11.825025 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tg748" event={"ID":"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291","Type":"ContainerStarted","Data":"c8b1690752e6e80c98c136389f5875143dda4c0d251a16e4d5a5ccb011686504"} Jan 27 18:28:11 crc kubenswrapper[5049]: I0127 18:28:11.850705 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tg748" podStartSLOduration=3.23638153 podStartE2EDuration="5.850687378s" podCreationTimestamp="2026-01-27 18:28:06 +0000 UTC" firstStartedPulling="2026-01-27 18:28:08.773807195 +0000 UTC m=+5463.872780744" lastFinishedPulling="2026-01-27 18:28:11.388113043 +0000 UTC m=+5466.487086592" observedRunningTime="2026-01-27 18:28:11.84579894 +0000 UTC m=+5466.944772509" watchObservedRunningTime="2026-01-27 18:28:11.850687378 +0000 UTC m=+5466.949660917" Jan 27 18:28:16 crc kubenswrapper[5049]: I0127 18:28:16.972354 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:16 crc kubenswrapper[5049]: I0127 18:28:16.973987 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:17 crc kubenswrapper[5049]: I0127 18:28:17.023453 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:17 crc kubenswrapper[5049]: I0127 18:28:17.061146 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vfktn"] Jan 27 18:28:17 crc kubenswrapper[5049]: I0127 18:28:17.069901 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vfktn"] Jan 27 18:28:17 crc kubenswrapper[5049]: I0127 18:28:17.664417 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9045d3d5-d562-40d5-b0f3-5261ca0ce8bd" 
path="/var/lib/kubelet/pods/9045d3d5-d562-40d5-b0f3-5261ca0ce8bd/volumes" Jan 27 18:28:17 crc kubenswrapper[5049]: I0127 18:28:17.934749 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:17 crc kubenswrapper[5049]: I0127 18:28:17.985631 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tg748"] Jan 27 18:28:19 crc kubenswrapper[5049]: I0127 18:28:19.898128 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tg748" podUID="e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" containerName="registry-server" containerID="cri-o://c8b1690752e6e80c98c136389f5875143dda4c0d251a16e4d5a5ccb011686504" gracePeriod=2 Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.315882 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.424652 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-utilities\") pod \"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\" (UID: \"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\") " Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.424846 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-catalog-content\") pod \"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\" (UID: \"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\") " Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.424917 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txk4l\" (UniqueName: \"kubernetes.io/projected/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-kube-api-access-txk4l\") pod \"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\" (UID: \"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291\") " Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.426663 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-utilities" (OuterVolumeSpecName: "utilities") pod "e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" (UID: "e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.432477 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-kube-api-access-txk4l" (OuterVolumeSpecName: "kube-api-access-txk4l") pod "e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" (UID: "e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291"). InnerVolumeSpecName "kube-api-access-txk4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.491314 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" (UID: "e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.527006 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.527115 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.527134 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txk4l\" (UniqueName: \"kubernetes.io/projected/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291-kube-api-access-txk4l\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.909816 5049 generic.go:334] "Generic (PLEG): container finished" podID="e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" containerID="c8b1690752e6e80c98c136389f5875143dda4c0d251a16e4d5a5ccb011686504" exitCode=0 Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.909906 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tg748" event={"ID":"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291","Type":"ContainerDied","Data":"c8b1690752e6e80c98c136389f5875143dda4c0d251a16e4d5a5ccb011686504"} Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.909942 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tg748" event={"ID":"e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291","Type":"ContainerDied","Data":"8799d9ab126e3ede78836746b7a5b6cdd06fd39454f39a87b260fabadd624bb1"} Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.910024 5049 scope.go:117] "RemoveContainer" containerID="c8b1690752e6e80c98c136389f5875143dda4c0d251a16e4d5a5ccb011686504" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.910175 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tg748" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.937174 5049 scope.go:117] "RemoveContainer" containerID="cf7bef42b49a5f334eebcc9f90abdbe4c45b1ee36dc5f337f680eb0ffe827a13" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.958102 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tg748"] Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.967300 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tg748"] Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.970334 5049 scope.go:117] "RemoveContainer" containerID="b9a18796ba998b231e23d890e581bf2c9320235336d19a2d1bdc46e1bbeeb9dd" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.997655 5049 scope.go:117] "RemoveContainer" containerID="c8b1690752e6e80c98c136389f5875143dda4c0d251a16e4d5a5ccb011686504" Jan 27 18:28:20 crc kubenswrapper[5049]: E0127 18:28:20.998288 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8b1690752e6e80c98c136389f5875143dda4c0d251a16e4d5a5ccb011686504\": container with ID starting with c8b1690752e6e80c98c136389f5875143dda4c0d251a16e4d5a5ccb011686504 not found: ID does not exist" containerID="c8b1690752e6e80c98c136389f5875143dda4c0d251a16e4d5a5ccb011686504" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.998345 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8b1690752e6e80c98c136389f5875143dda4c0d251a16e4d5a5ccb011686504"} err="failed to get container status \"c8b1690752e6e80c98c136389f5875143dda4c0d251a16e4d5a5ccb011686504\": rpc error: code = NotFound desc = could not find container \"c8b1690752e6e80c98c136389f5875143dda4c0d251a16e4d5a5ccb011686504\": container with ID starting with c8b1690752e6e80c98c136389f5875143dda4c0d251a16e4d5a5ccb011686504 not found: ID does not exist" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.998381 5049 scope.go:117] "RemoveContainer" containerID="cf7bef42b49a5f334eebcc9f90abdbe4c45b1ee36dc5f337f680eb0ffe827a13" Jan 27 18:28:20 crc kubenswrapper[5049]: E0127 18:28:20.999107 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf7bef42b49a5f334eebcc9f90abdbe4c45b1ee36dc5f337f680eb0ffe827a13\": container with ID starting with cf7bef42b49a5f334eebcc9f90abdbe4c45b1ee36dc5f337f680eb0ffe827a13 not found: ID does not exist" containerID="cf7bef42b49a5f334eebcc9f90abdbe4c45b1ee36dc5f337f680eb0ffe827a13" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.999162 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf7bef42b49a5f334eebcc9f90abdbe4c45b1ee36dc5f337f680eb0ffe827a13"} err="failed to get container status \"cf7bef42b49a5f334eebcc9f90abdbe4c45b1ee36dc5f337f680eb0ffe827a13\": rpc error: code = NotFound desc = could not find container \"cf7bef42b49a5f334eebcc9f90abdbe4c45b1ee36dc5f337f680eb0ffe827a13\": container with ID starting with cf7bef42b49a5f334eebcc9f90abdbe4c45b1ee36dc5f337f680eb0ffe827a13 not found: ID does not exist" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.999189 5049 scope.go:117] "RemoveContainer" containerID="b9a18796ba998b231e23d890e581bf2c9320235336d19a2d1bdc46e1bbeeb9dd" Jan 27 18:28:20 crc kubenswrapper[5049]: E0127 18:28:20.999529 5049 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b9a18796ba998b231e23d890e581bf2c9320235336d19a2d1bdc46e1bbeeb9dd\": container with ID starting with b9a18796ba998b231e23d890e581bf2c9320235336d19a2d1bdc46e1bbeeb9dd not found: ID does not exist" containerID="b9a18796ba998b231e23d890e581bf2c9320235336d19a2d1bdc46e1bbeeb9dd" Jan 27 18:28:20 crc kubenswrapper[5049]: I0127 18:28:20.999586 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a18796ba998b231e23d890e581bf2c9320235336d19a2d1bdc46e1bbeeb9dd"} err="failed to get container status \"b9a18796ba998b231e23d890e581bf2c9320235336d19a2d1bdc46e1bbeeb9dd\": rpc error: code = NotFound desc = could not find container \"b9a18796ba998b231e23d890e581bf2c9320235336d19a2d1bdc46e1bbeeb9dd\": container with ID starting with b9a18796ba998b231e23d890e581bf2c9320235336d19a2d1bdc46e1bbeeb9dd not found: ID does not exist" Jan 27 18:28:21 crc kubenswrapper[5049]: I0127 18:28:21.656277 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" path="/var/lib/kubelet/pods/e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291/volumes" Jan 27 18:28:24 crc kubenswrapper[5049]: I0127 18:28:24.646608 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:28:24 crc kubenswrapper[5049]: E0127 18:28:24.647161 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.736612 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8xwg6"] Jan 27 18:28:33 crc kubenswrapper[5049]: E0127 18:28:33.737715 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" containerName="registry-server" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.737733 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" containerName="registry-server" Jan 27 18:28:33 crc kubenswrapper[5049]: E0127 18:28:33.737766 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" containerName="extract-content" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.737774 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" containerName="extract-content" Jan 27 18:28:33 crc kubenswrapper[5049]: E0127 18:28:33.737783 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" containerName="extract-utilities" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.737791 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" containerName="extract-utilities" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.738005 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a189a0-a6ee-4c4f-9c3a-fe3b56d3b291" containerName="registry-server" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.738711 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8xwg6" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.745462 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-abd8-account-create-update-nxr56"] Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.747377 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-abd8-account-create-update-nxr56" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.748800 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.754562 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8xwg6"] Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.780667 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-abd8-account-create-update-nxr56"] Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.855617 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk2pw\" (UniqueName: \"kubernetes.io/projected/6072ab49-faed-4bfd-a9f1-7c2bf042e5a6-kube-api-access-wk2pw\") pod \"barbican-db-create-8xwg6\" (UID: \"6072ab49-faed-4bfd-a9f1-7c2bf042e5a6\") " pod="openstack/barbican-db-create-8xwg6" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.855713 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d669\" (UniqueName: \"kubernetes.io/projected/b24494b5-89ad-44dc-a138-69241e3c1e5b-kube-api-access-7d669\") pod \"barbican-abd8-account-create-update-nxr56\" (UID: \"b24494b5-89ad-44dc-a138-69241e3c1e5b\") " pod="openstack/barbican-abd8-account-create-update-nxr56" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.855946 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6072ab49-faed-4bfd-a9f1-7c2bf042e5a6-operator-scripts\") pod \"barbican-db-create-8xwg6\" (UID: \"6072ab49-faed-4bfd-a9f1-7c2bf042e5a6\") " pod="openstack/barbican-db-create-8xwg6" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.856045 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24494b5-89ad-44dc-a138-69241e3c1e5b-operator-scripts\") pod \"barbican-abd8-account-create-update-nxr56\" (UID: \"b24494b5-89ad-44dc-a138-69241e3c1e5b\") " pod="openstack/barbican-abd8-account-create-update-nxr56" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.957822 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk2pw\" (UniqueName: \"kubernetes.io/projected/6072ab49-faed-4bfd-a9f1-7c2bf042e5a6-kube-api-access-wk2pw\") pod \"barbican-db-create-8xwg6\" (UID: \"6072ab49-faed-4bfd-a9f1-7c2bf042e5a6\") " pod="openstack/barbican-db-create-8xwg6" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.957874 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d669\" (UniqueName: \"kubernetes.io/projected/b24494b5-89ad-44dc-a138-69241e3c1e5b-kube-api-access-7d669\") pod \"barbican-abd8-account-create-update-nxr56\" (UID: \"b24494b5-89ad-44dc-a138-69241e3c1e5b\") " pod="openstack/barbican-abd8-account-create-update-nxr56" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.958110 5049 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6072ab49-faed-4bfd-a9f1-7c2bf042e5a6-operator-scripts\") pod \"barbican-db-create-8xwg6\" (UID: \"6072ab49-faed-4bfd-a9f1-7c2bf042e5a6\") " pod="openstack/barbican-db-create-8xwg6" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.958154 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24494b5-89ad-44dc-a138-69241e3c1e5b-operator-scripts\") pod \"barbican-abd8-account-create-update-nxr56\" (UID: \"b24494b5-89ad-44dc-a138-69241e3c1e5b\") " pod="openstack/barbican-abd8-account-create-update-nxr56" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.958914 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6072ab49-faed-4bfd-a9f1-7c2bf042e5a6-operator-scripts\") pod \"barbican-db-create-8xwg6\" (UID: \"6072ab49-faed-4bfd-a9f1-7c2bf042e5a6\") " pod="openstack/barbican-db-create-8xwg6" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.958924 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24494b5-89ad-44dc-a138-69241e3c1e5b-operator-scripts\") pod \"barbican-abd8-account-create-update-nxr56\" (UID: \"b24494b5-89ad-44dc-a138-69241e3c1e5b\") " pod="openstack/barbican-abd8-account-create-update-nxr56" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.977521 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk2pw\" (UniqueName: \"kubernetes.io/projected/6072ab49-faed-4bfd-a9f1-7c2bf042e5a6-kube-api-access-wk2pw\") pod \"barbican-db-create-8xwg6\" (UID: \"6072ab49-faed-4bfd-a9f1-7c2bf042e5a6\") " pod="openstack/barbican-db-create-8xwg6" Jan 27 18:28:33 crc kubenswrapper[5049]: I0127 18:28:33.981052 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d669\" (UniqueName: \"kubernetes.io/projected/b24494b5-89ad-44dc-a138-69241e3c1e5b-kube-api-access-7d669\") pod \"barbican-abd8-account-create-update-nxr56\" (UID: \"b24494b5-89ad-44dc-a138-69241e3c1e5b\") " pod="openstack/barbican-abd8-account-create-update-nxr56" Jan 27 18:28:34 crc kubenswrapper[5049]: I0127 18:28:34.073602 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8xwg6" Jan 27 18:28:34 crc kubenswrapper[5049]: I0127 18:28:34.084697 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-abd8-account-create-update-nxr56" Jan 27 18:28:34 crc kubenswrapper[5049]: I0127 18:28:34.541744 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8xwg6"] Jan 27 18:28:34 crc kubenswrapper[5049]: I0127 18:28:34.603119 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-abd8-account-create-update-nxr56"] Jan 27 18:28:35 crc kubenswrapper[5049]: I0127 18:28:35.019166 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8xwg6" event={"ID":"6072ab49-faed-4bfd-a9f1-7c2bf042e5a6","Type":"ContainerStarted","Data":"330af43bb38f6e7342cddde186c012e2d3f83d87f155f20e4bb457cac0424393"} Jan 27 18:28:35 crc kubenswrapper[5049]: I0127 18:28:35.019569 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8xwg6" event={"ID":"6072ab49-faed-4bfd-a9f1-7c2bf042e5a6","Type":"ContainerStarted","Data":"9dc5d2c5400422390aeb6d38c71f0b01f81fd6d6104db13cf01792642bd6de28"} Jan 27 18:28:35 crc kubenswrapper[5049]: I0127 18:28:35.020759 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-abd8-account-create-update-nxr56" event={"ID":"b24494b5-89ad-44dc-a138-69241e3c1e5b","Type":"ContainerStarted","Data":"a6801024e17d24343d267d28c967cd5bcbe0d22d5c4282d1fb0670a7b4a86677"} Jan 27 18:28:35 crc kubenswrapper[5049]: I0127 18:28:35.020805 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-abd8-account-create-update-nxr56" event={"ID":"b24494b5-89ad-44dc-a138-69241e3c1e5b","Type":"ContainerStarted","Data":"eca9c99d97fc47badbda79567630fdf16bff8214774ab18c10c3a6ce41dc2a56"} Jan 27 18:28:35 crc kubenswrapper[5049]: I0127 18:28:35.037658 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-8xwg6" podStartSLOduration=2.037388433 podStartE2EDuration="2.037388433s" podCreationTimestamp="2026-01-27 18:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:35.032917867 +0000 UTC m=+5490.131891426" watchObservedRunningTime="2026-01-27 18:28:35.037388433 +0000 UTC m=+5490.136361992" Jan 27 18:28:35 crc kubenswrapper[5049]: I0127 18:28:35.055408 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-abd8-account-create-update-nxr56" podStartSLOduration=2.05539025 podStartE2EDuration="2.05539025s" podCreationTimestamp="2026-01-27 18:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:35.050090161 +0000 UTC m=+5490.149063740" watchObservedRunningTime="2026-01-27 18:28:35.05539025 +0000 UTC m=+5490.154363799" Jan 27 18:28:36 crc kubenswrapper[5049]: I0127 18:28:36.033368 5049 generic.go:334] "Generic (PLEG): container finished" podID="b24494b5-89ad-44dc-a138-69241e3c1e5b" containerID="a6801024e17d24343d267d28c967cd5bcbe0d22d5c4282d1fb0670a7b4a86677" exitCode=0 Jan 27 18:28:36 crc kubenswrapper[5049]: I0127 18:28:36.033447 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-abd8-account-create-update-nxr56" event={"ID":"b24494b5-89ad-44dc-a138-69241e3c1e5b","Type":"ContainerDied","Data":"a6801024e17d24343d267d28c967cd5bcbe0d22d5c4282d1fb0670a7b4a86677"} Jan 27 18:28:36 crc kubenswrapper[5049]: I0127 18:28:36.035819 5049 generic.go:334] "Generic (PLEG): container finished" 
podID="6072ab49-faed-4bfd-a9f1-7c2bf042e5a6" containerID="330af43bb38f6e7342cddde186c012e2d3f83d87f155f20e4bb457cac0424393" exitCode=0 Jan 27 18:28:36 crc kubenswrapper[5049]: I0127 18:28:36.035891 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8xwg6" event={"ID":"6072ab49-faed-4bfd-a9f1-7c2bf042e5a6","Type":"ContainerDied","Data":"330af43bb38f6e7342cddde186c012e2d3f83d87f155f20e4bb457cac0424393"} Jan 27 18:28:37 crc kubenswrapper[5049]: I0127 18:28:37.389521 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-abd8-account-create-update-nxr56" Jan 27 18:28:37 crc kubenswrapper[5049]: I0127 18:28:37.402500 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8xwg6" Jan 27 18:28:37 crc kubenswrapper[5049]: I0127 18:28:37.530492 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24494b5-89ad-44dc-a138-69241e3c1e5b-operator-scripts\") pod \"b24494b5-89ad-44dc-a138-69241e3c1e5b\" (UID: \"b24494b5-89ad-44dc-a138-69241e3c1e5b\") " Jan 27 18:28:37 crc kubenswrapper[5049]: I0127 18:28:37.531064 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6072ab49-faed-4bfd-a9f1-7c2bf042e5a6-operator-scripts\") pod \"6072ab49-faed-4bfd-a9f1-7c2bf042e5a6\" (UID: \"6072ab49-faed-4bfd-a9f1-7c2bf042e5a6\") " Jan 27 18:28:37 crc kubenswrapper[5049]: I0127 18:28:37.531133 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d669\" (UniqueName: \"kubernetes.io/projected/b24494b5-89ad-44dc-a138-69241e3c1e5b-kube-api-access-7d669\") pod \"b24494b5-89ad-44dc-a138-69241e3c1e5b\" (UID: \"b24494b5-89ad-44dc-a138-69241e3c1e5b\") " Jan 27 18:28:37 crc kubenswrapper[5049]: I0127 18:28:37.531219 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk2pw\" (UniqueName: \"kubernetes.io/projected/6072ab49-faed-4bfd-a9f1-7c2bf042e5a6-kube-api-access-wk2pw\") pod \"6072ab49-faed-4bfd-a9f1-7c2bf042e5a6\" (UID: \"6072ab49-faed-4bfd-a9f1-7c2bf042e5a6\") " Jan 27 18:28:37 crc kubenswrapper[5049]: I0127 18:28:37.531820 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6072ab49-faed-4bfd-a9f1-7c2bf042e5a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6072ab49-faed-4bfd-a9f1-7c2bf042e5a6" (UID: "6072ab49-faed-4bfd-a9f1-7c2bf042e5a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:37 crc kubenswrapper[5049]: I0127 18:28:37.531853 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b24494b5-89ad-44dc-a138-69241e3c1e5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b24494b5-89ad-44dc-a138-69241e3c1e5b" (UID: "b24494b5-89ad-44dc-a138-69241e3c1e5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:37 crc kubenswrapper[5049]: I0127 18:28:37.537474 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6072ab49-faed-4bfd-a9f1-7c2bf042e5a6-kube-api-access-wk2pw" (OuterVolumeSpecName: "kube-api-access-wk2pw") pod "6072ab49-faed-4bfd-a9f1-7c2bf042e5a6" (UID: "6072ab49-faed-4bfd-a9f1-7c2bf042e5a6"). 
InnerVolumeSpecName "kube-api-access-wk2pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:37 crc kubenswrapper[5049]: I0127 18:28:37.539129 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24494b5-89ad-44dc-a138-69241e3c1e5b-kube-api-access-7d669" (OuterVolumeSpecName: "kube-api-access-7d669") pod "b24494b5-89ad-44dc-a138-69241e3c1e5b" (UID: "b24494b5-89ad-44dc-a138-69241e3c1e5b"). InnerVolumeSpecName "kube-api-access-7d669". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:37 crc kubenswrapper[5049]: I0127 18:28:37.633736 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk2pw\" (UniqueName: \"kubernetes.io/projected/6072ab49-faed-4bfd-a9f1-7c2bf042e5a6-kube-api-access-wk2pw\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:37 crc kubenswrapper[5049]: I0127 18:28:37.633785 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24494b5-89ad-44dc-a138-69241e3c1e5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:37 crc kubenswrapper[5049]: I0127 18:28:37.633806 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6072ab49-faed-4bfd-a9f1-7c2bf042e5a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:37 crc kubenswrapper[5049]: I0127 18:28:37.633825 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d669\" (UniqueName: \"kubernetes.io/projected/b24494b5-89ad-44dc-a138-69241e3c1e5b-kube-api-access-7d669\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:38 crc kubenswrapper[5049]: I0127 18:28:38.054726 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8xwg6" Jan 27 18:28:38 crc kubenswrapper[5049]: I0127 18:28:38.056424 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8xwg6" event={"ID":"6072ab49-faed-4bfd-a9f1-7c2bf042e5a6","Type":"ContainerDied","Data":"9dc5d2c5400422390aeb6d38c71f0b01f81fd6d6104db13cf01792642bd6de28"} Jan 27 18:28:38 crc kubenswrapper[5049]: I0127 18:28:38.056483 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dc5d2c5400422390aeb6d38c71f0b01f81fd6d6104db13cf01792642bd6de28" Jan 27 18:28:38 crc kubenswrapper[5049]: I0127 18:28:38.060516 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-abd8-account-create-update-nxr56" event={"ID":"b24494b5-89ad-44dc-a138-69241e3c1e5b","Type":"ContainerDied","Data":"eca9c99d97fc47badbda79567630fdf16bff8214774ab18c10c3a6ce41dc2a56"} Jan 27 18:28:38 crc kubenswrapper[5049]: I0127 18:28:38.060545 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eca9c99d97fc47badbda79567630fdf16bff8214774ab18c10c3a6ce41dc2a56" Jan 27 18:28:38 crc kubenswrapper[5049]: I0127 18:28:38.060607 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-abd8-account-create-update-nxr56" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.055326 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2cgdj"] Jan 27 18:28:39 crc kubenswrapper[5049]: E0127 18:28:39.056160 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24494b5-89ad-44dc-a138-69241e3c1e5b" containerName="mariadb-account-create-update" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.056179 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24494b5-89ad-44dc-a138-69241e3c1e5b" containerName="mariadb-account-create-update" Jan 27 18:28:39 crc kubenswrapper[5049]: E0127 18:28:39.056198 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6072ab49-faed-4bfd-a9f1-7c2bf042e5a6" containerName="mariadb-database-create" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.056206 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="6072ab49-faed-4bfd-a9f1-7c2bf042e5a6" containerName="mariadb-database-create" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.056400 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="6072ab49-faed-4bfd-a9f1-7c2bf042e5a6" containerName="mariadb-database-create" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.056416 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24494b5-89ad-44dc-a138-69241e3c1e5b" containerName="mariadb-account-create-update" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.057283 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2cgdj" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.060209 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.061156 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2w4sk" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.066827 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2cgdj"] Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.158187 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c6cce11c-d752-4e6a-8df3-d4262505bb1f-db-sync-config-data\") pod \"barbican-db-sync-2cgdj\" (UID: \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\") " pod="openstack/barbican-db-sync-2cgdj" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.158282 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh5sd\" (UniqueName: \"kubernetes.io/projected/c6cce11c-d752-4e6a-8df3-d4262505bb1f-kube-api-access-nh5sd\") pod \"barbican-db-sync-2cgdj\" (UID: \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\") " pod="openstack/barbican-db-sync-2cgdj" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.158365 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cce11c-d752-4e6a-8df3-d4262505bb1f-combined-ca-bundle\") pod \"barbican-db-sync-2cgdj\" (UID: \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\") " pod="openstack/barbican-db-sync-2cgdj" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.260094 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c6cce11c-d752-4e6a-8df3-d4262505bb1f-db-sync-config-data\") pod \"barbican-db-sync-2cgdj\" (UID: \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\") " pod="openstack/barbican-db-sync-2cgdj" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.260199 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh5sd\" (UniqueName: \"kubernetes.io/projected/c6cce11c-d752-4e6a-8df3-d4262505bb1f-kube-api-access-nh5sd\") pod \"barbican-db-sync-2cgdj\" (UID: \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\") " pod="openstack/barbican-db-sync-2cgdj" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.260287 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cce11c-d752-4e6a-8df3-d4262505bb1f-combined-ca-bundle\") pod \"barbican-db-sync-2cgdj\" (UID: \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\") " pod="openstack/barbican-db-sync-2cgdj" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.265944 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c6cce11c-d752-4e6a-8df3-d4262505bb1f-db-sync-config-data\") pod \"barbican-db-sync-2cgdj\" (UID: \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\") " pod="openstack/barbican-db-sync-2cgdj" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.265978 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cce11c-d752-4e6a-8df3-d4262505bb1f-combined-ca-bundle\") pod \"barbican-db-sync-2cgdj\" (UID: \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\") " pod="openstack/barbican-db-sync-2cgdj" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.290318 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh5sd\" (UniqueName: \"kubernetes.io/projected/c6cce11c-d752-4e6a-8df3-d4262505bb1f-kube-api-access-nh5sd\") pod \"barbican-db-sync-2cgdj\" (UID: \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\") " pod="openstack/barbican-db-sync-2cgdj" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.391545 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2cgdj" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.651407 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:28:39 crc kubenswrapper[5049]: E0127 18:28:39.652364 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:28:39 crc kubenswrapper[5049]: I0127 18:28:39.857349 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2cgdj"] Jan 27 18:28:40 crc kubenswrapper[5049]: I0127 18:28:40.096763 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2cgdj" event={"ID":"c6cce11c-d752-4e6a-8df3-d4262505bb1f","Type":"ContainerStarted","Data":"316a0979ed5d061a46353c9f222a71986b0695bdef2e05a687e5574b2c68ffc4"} Jan 27 18:28:40 crc kubenswrapper[5049]: I0127 18:28:40.097957 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2cgdj" event={"ID":"c6cce11c-d752-4e6a-8df3-d4262505bb1f","Type":"ContainerStarted","Data":"7c41921ada1c27fa3d9cc829c251519197972156071a6f2384ba29e3a0ad7020"} Jan 27 18:28:40 crc kubenswrapper[5049]: I0127 18:28:40.113635 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2cgdj" podStartSLOduration=1.113616001 podStartE2EDuration="1.113616001s" podCreationTimestamp="2026-01-27 18:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:40.113135207 +0000 UTC m=+5495.212108756" watchObservedRunningTime="2026-01-27 18:28:40.113616001 +0000 UTC m=+5495.212589550" Jan 27 18:28:42 crc kubenswrapper[5049]: I0127 18:28:42.114121 5049 generic.go:334] "Generic (PLEG): container finished" podID="c6cce11c-d752-4e6a-8df3-d4262505bb1f" containerID="316a0979ed5d061a46353c9f222a71986b0695bdef2e05a687e5574b2c68ffc4" exitCode=0 Jan 27 18:28:42 crc kubenswrapper[5049]: I0127 18:28:42.114161 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2cgdj" event={"ID":"c6cce11c-d752-4e6a-8df3-d4262505bb1f","Type":"ContainerDied","Data":"316a0979ed5d061a46353c9f222a71986b0695bdef2e05a687e5574b2c68ffc4"} Jan 27 18:28:43 crc kubenswrapper[5049]: I0127 18:28:43.421275 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2cgdj" Jan 27 18:28:43 crc kubenswrapper[5049]: I0127 18:28:43.538485 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cce11c-d752-4e6a-8df3-d4262505bb1f-combined-ca-bundle\") pod \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\" (UID: \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\") " Jan 27 18:28:43 crc kubenswrapper[5049]: I0127 18:28:43.538568 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh5sd\" (UniqueName: \"kubernetes.io/projected/c6cce11c-d752-4e6a-8df3-d4262505bb1f-kube-api-access-nh5sd\") pod \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\" (UID: \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\") " Jan 27 18:28:43 crc kubenswrapper[5049]: I0127 18:28:43.538634 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c6cce11c-d752-4e6a-8df3-d4262505bb1f-db-sync-config-data\") pod \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\" (UID: \"c6cce11c-d752-4e6a-8df3-d4262505bb1f\") " Jan 27 18:28:43 crc kubenswrapper[5049]: I0127 18:28:43.544748 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6cce11c-d752-4e6a-8df3-d4262505bb1f-kube-api-access-nh5sd" (OuterVolumeSpecName: "kube-api-access-nh5sd") pod "c6cce11c-d752-4e6a-8df3-d4262505bb1f" (UID: "c6cce11c-d752-4e6a-8df3-d4262505bb1f"). InnerVolumeSpecName "kube-api-access-nh5sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:43 crc kubenswrapper[5049]: I0127 18:28:43.547643 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cce11c-d752-4e6a-8df3-d4262505bb1f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c6cce11c-d752-4e6a-8df3-d4262505bb1f" (UID: "c6cce11c-d752-4e6a-8df3-d4262505bb1f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:43 crc kubenswrapper[5049]: I0127 18:28:43.563722 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6cce11c-d752-4e6a-8df3-d4262505bb1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6cce11c-d752-4e6a-8df3-d4262505bb1f" (UID: "c6cce11c-d752-4e6a-8df3-d4262505bb1f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:28:43 crc kubenswrapper[5049]: I0127 18:28:43.641121 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cce11c-d752-4e6a-8df3-d4262505bb1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:43 crc kubenswrapper[5049]: I0127 18:28:43.641177 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh5sd\" (UniqueName: \"kubernetes.io/projected/c6cce11c-d752-4e6a-8df3-d4262505bb1f-kube-api-access-nh5sd\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:43 crc kubenswrapper[5049]: I0127 18:28:43.641190 5049 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c6cce11c-d752-4e6a-8df3-d4262505bb1f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.133457 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2cgdj" event={"ID":"c6cce11c-d752-4e6a-8df3-d4262505bb1f","Type":"ContainerDied","Data":"7c41921ada1c27fa3d9cc829c251519197972156071a6f2384ba29e3a0ad7020"} Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.133500 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c41921ada1c27fa3d9cc829c251519197972156071a6f2384ba29e3a0ad7020" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.133563 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2cgdj" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.367577 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-979c4cfc7-x5nfk"] Jan 27 18:28:44 crc kubenswrapper[5049]: E0127 18:28:44.367978 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6cce11c-d752-4e6a-8df3-d4262505bb1f" containerName="barbican-db-sync" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.368001 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6cce11c-d752-4e6a-8df3-d4262505bb1f" containerName="barbican-db-sync" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.368175 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6cce11c-d752-4e6a-8df3-d4262505bb1f" containerName="barbican-db-sync" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.369071 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.374923 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.375276 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2w4sk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.379332 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.389092 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-979c4cfc7-x5nfk"] Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.423579 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-55c49fb878-2ctcc"] Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.425583 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.427928 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.444074 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-55c49fb878-2ctcc"] Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.455710 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/103768dd-1e58-4bab-88df-808576121cb4-config-data-custom\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: \"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.455764 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103768dd-1e58-4bab-88df-808576121cb4-config-data\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: \"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.455800 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ch8b\" (UniqueName: \"kubernetes.io/projected/103768dd-1e58-4bab-88df-808576121cb4-kube-api-access-6ch8b\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: \"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.455881 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/103768dd-1e58-4bab-88df-808576121cb4-logs\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: \"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.455949 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103768dd-1e58-4bab-88df-808576121cb4-combined-ca-bundle\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: \"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.504735 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67844db599-2245g"] Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.506258 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.520605 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67844db599-2245g"] Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.558279 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9142e28-4c00-4dc0-b0f0-0370cd638740-config-data-custom\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.558359 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/103768dd-1e58-4bab-88df-808576121cb4-logs\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: \"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.558871 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-config\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.558898 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-ovsdbserver-nb\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.558922 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103768dd-1e58-4bab-88df-808576121cb4-combined-ca-bundle\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: \"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.558904 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/103768dd-1e58-4bab-88df-808576121cb4-logs\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: \"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.559163 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9142e28-4c00-4dc0-b0f0-0370cd638740-combined-ca-bundle\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.559243 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-ovsdbserver-sb\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.559334 5049 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m62hn\" (UniqueName: \"kubernetes.io/projected/39e3eb69-6b02-4fc4-9015-012bca2924b8-kube-api-access-m62hn\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.559368 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9142e28-4c00-4dc0-b0f0-0370cd638740-config-data\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.559414 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9142e28-4c00-4dc0-b0f0-0370cd638740-logs\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.559454 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/103768dd-1e58-4bab-88df-808576121cb4-config-data-custom\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: \"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.559478 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-dns-svc\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.559499 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103768dd-1e58-4bab-88df-808576121cb4-config-data\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: \"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.559516 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc4ll\" (UniqueName: \"kubernetes.io/projected/e9142e28-4c00-4dc0-b0f0-0370cd638740-kube-api-access-tc4ll\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.559547 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ch8b\" (UniqueName: \"kubernetes.io/projected/103768dd-1e58-4bab-88df-808576121cb4-kube-api-access-6ch8b\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: \"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.565049 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/103768dd-1e58-4bab-88df-808576121cb4-config-data-custom\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: 
\"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.567992 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103768dd-1e58-4bab-88df-808576121cb4-combined-ca-bundle\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: \"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.575463 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103768dd-1e58-4bab-88df-808576121cb4-config-data\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: \"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.580389 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ch8b\" (UniqueName: \"kubernetes.io/projected/103768dd-1e58-4bab-88df-808576121cb4-kube-api-access-6ch8b\") pod \"barbican-worker-979c4cfc7-x5nfk\" (UID: \"103768dd-1e58-4bab-88df-808576121cb4\") " pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.624968 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59c589b678-6wqpn"] Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.626469 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.629756 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.644847 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59c589b678-6wqpn"] Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.660761 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-config\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.660829 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-ovsdbserver-nb\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.660872 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9142e28-4c00-4dc0-b0f0-0370cd638740-combined-ca-bundle\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.660908 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-ovsdbserver-sb\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc 
kubenswrapper[5049]: I0127 18:28:44.660956 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m62hn\" (UniqueName: \"kubernetes.io/projected/39e3eb69-6b02-4fc4-9015-012bca2924b8-kube-api-access-m62hn\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.660981 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9142e28-4c00-4dc0-b0f0-0370cd638740-config-data\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.661013 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9142e28-4c00-4dc0-b0f0-0370cd638740-logs\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.661049 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-dns-svc\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.661079 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc4ll\" (UniqueName: \"kubernetes.io/projected/e9142e28-4c00-4dc0-b0f0-0370cd638740-kube-api-access-tc4ll\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.661134 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9142e28-4c00-4dc0-b0f0-0370cd638740-config-data-custom\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.662249 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9142e28-4c00-4dc0-b0f0-0370cd638740-logs\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.663010 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-config\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.663739 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-ovsdbserver-sb\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " 
pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.665365 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-dns-svc\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.666762 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-ovsdbserver-nb\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.681553 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9142e28-4c00-4dc0-b0f0-0370cd638740-config-data-custom\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.682693 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9142e28-4c00-4dc0-b0f0-0370cd638740-combined-ca-bundle\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.683005 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9142e28-4c00-4dc0-b0f0-0370cd638740-config-data\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.685257 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc4ll\" (UniqueName: \"kubernetes.io/projected/e9142e28-4c00-4dc0-b0f0-0370cd638740-kube-api-access-tc4ll\") pod \"barbican-keystone-listener-55c49fb878-2ctcc\" (UID: \"e9142e28-4c00-4dc0-b0f0-0370cd638740\") " pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.685640 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m62hn\" (UniqueName: \"kubernetes.io/projected/39e3eb69-6b02-4fc4-9015-012bca2924b8-kube-api-access-m62hn\") pod \"dnsmasq-dns-67844db599-2245g\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.693988 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-979c4cfc7-x5nfk" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.757366 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.769185 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21fc71b8-47e1-410a-aa00-1e365cca5af7-config-data\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.769931 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21fc71b8-47e1-410a-aa00-1e365cca5af7-config-data-custom\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.771491 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtlmj\" (UniqueName: \"kubernetes.io/projected/21fc71b8-47e1-410a-aa00-1e365cca5af7-kube-api-access-rtlmj\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.771855 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21fc71b8-47e1-410a-aa00-1e365cca5af7-logs\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.772025 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21fc71b8-47e1-410a-aa00-1e365cca5af7-combined-ca-bundle\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.846087 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.874612 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21fc71b8-47e1-410a-aa00-1e365cca5af7-combined-ca-bundle\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.874717 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21fc71b8-47e1-410a-aa00-1e365cca5af7-config-data\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.874792 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21fc71b8-47e1-410a-aa00-1e365cca5af7-config-data-custom\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.874833 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtlmj\" (UniqueName: \"kubernetes.io/projected/21fc71b8-47e1-410a-aa00-1e365cca5af7-kube-api-access-rtlmj\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.874858 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21fc71b8-47e1-410a-aa00-1e365cca5af7-logs\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.875574 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21fc71b8-47e1-410a-aa00-1e365cca5af7-logs\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.881384 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21fc71b8-47e1-410a-aa00-1e365cca5af7-combined-ca-bundle\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.881963 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21fc71b8-47e1-410a-aa00-1e365cca5af7-config-data-custom\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.882390 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21fc71b8-47e1-410a-aa00-1e365cca5af7-config-data\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc 
kubenswrapper[5049]: I0127 18:28:44.900377 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtlmj\" (UniqueName: \"kubernetes.io/projected/21fc71b8-47e1-410a-aa00-1e365cca5af7-kube-api-access-rtlmj\") pod \"barbican-api-59c589b678-6wqpn\" (UID: \"21fc71b8-47e1-410a-aa00-1e365cca5af7\") " pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:44 crc kubenswrapper[5049]: I0127 18:28:44.949243 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:45 crc kubenswrapper[5049]: I0127 18:28:45.179865 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67844db599-2245g"] Jan 27 18:28:45 crc kubenswrapper[5049]: I0127 18:28:45.189908 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-979c4cfc7-x5nfk"] Jan 27 18:28:45 crc kubenswrapper[5049]: I0127 18:28:45.311567 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-55c49fb878-2ctcc"] Jan 27 18:28:45 crc kubenswrapper[5049]: I0127 18:28:45.458796 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59c589b678-6wqpn"] Jan 27 18:28:45 crc kubenswrapper[5049]: W0127 18:28:45.518303 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21fc71b8_47e1_410a_aa00_1e365cca5af7.slice/crio-5c5ab5ba32b5372d57d66718f58fde923c8496bcfddbf7431d37dea0bc8ff15a WatchSource:0}: Error finding container 5c5ab5ba32b5372d57d66718f58fde923c8496bcfddbf7431d37dea0bc8ff15a: Status 404 returned error can't find the container with id 5c5ab5ba32b5372d57d66718f58fde923c8496bcfddbf7431d37dea0bc8ff15a Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.157826 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-979c4cfc7-x5nfk" event={"ID":"103768dd-1e58-4bab-88df-808576121cb4","Type":"ContainerStarted","Data":"056bfb37baae4b2c49d16f8e7553dfe649911bf5950a6e9def3f44e3996899be"} Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.158210 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-979c4cfc7-x5nfk" event={"ID":"103768dd-1e58-4bab-88df-808576121cb4","Type":"ContainerStarted","Data":"895a9629fa476e7e834aff0fbf68e5fd48985dad9b9e158d5a6ed8a36666a1f1"} Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.158228 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-979c4cfc7-x5nfk" event={"ID":"103768dd-1e58-4bab-88df-808576121cb4","Type":"ContainerStarted","Data":"d169a5af4588c7f8bd19cb000e09d98278ee207eadcfd7df3261a01ee8300e7a"} Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.160872 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" event={"ID":"e9142e28-4c00-4dc0-b0f0-0370cd638740","Type":"ContainerStarted","Data":"610f87e0561d92f096d677ff0629cda174ce649378acb5b93266d503b6e4bce7"} Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.161059 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" event={"ID":"e9142e28-4c00-4dc0-b0f0-0370cd638740","Type":"ContainerStarted","Data":"e0672aba02cb41555d2eeaf3b0718f588395a6eb1f15a322895e7d60095d85c1"} Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.161148 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" event={"ID":"e9142e28-4c00-4dc0-b0f0-0370cd638740","Type":"ContainerStarted","Data":"cb017abd88b44e002d3a105c8d747cfc8adbd1e4af820d7e24ca0c3047498693"} Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.164534 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c589b678-6wqpn" event={"ID":"21fc71b8-47e1-410a-aa00-1e365cca5af7","Type":"ContainerStarted","Data":"c29b29f3b16b01c9981c4b19086d874c0a13f9f536fafaa440fabcf95b6c4fa9"} Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.164754 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c589b678-6wqpn" event={"ID":"21fc71b8-47e1-410a-aa00-1e365cca5af7","Type":"ContainerStarted","Data":"b91e4021e8134895bf9d5d4505e946e3a27bf70c6ea65e0790b6ef3422671da6"} Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.164874 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59c589b678-6wqpn" event={"ID":"21fc71b8-47e1-410a-aa00-1e365cca5af7","Type":"ContainerStarted","Data":"5c5ab5ba32b5372d57d66718f58fde923c8496bcfddbf7431d37dea0bc8ff15a"} Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.165884 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.165997 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.170042 5049 generic.go:334] "Generic (PLEG): container finished" podID="39e3eb69-6b02-4fc4-9015-012bca2924b8" containerID="d654e17cbda8f8fad212636913d3c4b6d3ec08816d49fc1c049b690e9e933383" exitCode=0 Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.170269 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67844db599-2245g" event={"ID":"39e3eb69-6b02-4fc4-9015-012bca2924b8","Type":"ContainerDied","Data":"d654e17cbda8f8fad212636913d3c4b6d3ec08816d49fc1c049b690e9e933383"} Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.170394 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67844db599-2245g" event={"ID":"39e3eb69-6b02-4fc4-9015-012bca2924b8","Type":"ContainerStarted","Data":"232c3d1075a14783b00819128e588601641fcfa8c9a26c5cccb3673964b75865"} Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.181306 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-979c4cfc7-x5nfk" podStartSLOduration=2.181285785 podStartE2EDuration="2.181285785s" podCreationTimestamp="2026-01-27 18:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:46.174096552 +0000 UTC m=+5501.273070111" watchObservedRunningTime="2026-01-27 18:28:46.181285785 +0000 UTC m=+5501.280259334" Jan 27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.237770 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59c589b678-6wqpn" podStartSLOduration=2.2377498989999998 podStartE2EDuration="2.237749899s" podCreationTimestamp="2026-01-27 18:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:46.228347823 +0000 UTC m=+5501.327321372" watchObservedRunningTime="2026-01-27 18:28:46.237749899 +0000 UTC m=+5501.336723448" Jan 
27 18:28:46 crc kubenswrapper[5049]: I0127 18:28:46.288008 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-55c49fb878-2ctcc" podStartSLOduration=2.287992817 podStartE2EDuration="2.287992817s" podCreationTimestamp="2026-01-27 18:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:46.276449452 +0000 UTC m=+5501.375423001" watchObservedRunningTime="2026-01-27 18:28:46.287992817 +0000 UTC m=+5501.386966366" Jan 27 18:28:47 crc kubenswrapper[5049]: I0127 18:28:47.178151 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67844db599-2245g" event={"ID":"39e3eb69-6b02-4fc4-9015-012bca2924b8","Type":"ContainerStarted","Data":"5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc"} Jan 27 18:28:47 crc kubenswrapper[5049]: I0127 18:28:47.205498 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67844db599-2245g" podStartSLOduration=3.205479241 podStartE2EDuration="3.205479241s" podCreationTimestamp="2026-01-27 18:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:28:47.2022356 +0000 UTC m=+5502.301209159" watchObservedRunningTime="2026-01-27 18:28:47.205479241 +0000 UTC m=+5502.304452780" Jan 27 18:28:48 crc kubenswrapper[5049]: I0127 18:28:48.186651 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:50 crc kubenswrapper[5049]: I0127 18:28:50.646996 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:28:50 crc kubenswrapper[5049]: E0127 18:28:50.647957 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:28:54 crc kubenswrapper[5049]: I0127 18:28:54.848844 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:28:54 crc kubenswrapper[5049]: I0127 18:28:54.947861 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9db86f55-6jp76"] Jan 27 18:28:54 crc kubenswrapper[5049]: I0127 18:28:54.948114 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" podUID="a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a" containerName="dnsmasq-dns" containerID="cri-o://a720b51cb41199d70274684da5e050f320a48ffe3ab4716df1a4c4cc98087d64" gracePeriod=10 Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.303544 5049 generic.go:334] "Generic (PLEG): container finished" podID="a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a" containerID="a720b51cb41199d70274684da5e050f320a48ffe3ab4716df1a4c4cc98087d64" exitCode=0 Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.303588 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" 
event={"ID":"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a","Type":"ContainerDied","Data":"a720b51cb41199d70274684da5e050f320a48ffe3ab4716df1a4c4cc98087d64"} Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.565059 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.701409 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-ovsdbserver-nb\") pod \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.701494 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-ovsdbserver-sb\") pod \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.701553 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-dns-svc\") pod \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.701700 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-config\") pod \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.701784 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgllq\" (UniqueName: \"kubernetes.io/projected/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-kube-api-access-kgllq\") pod \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\" (UID: \"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a\") " Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.708467 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-kube-api-access-kgllq" (OuterVolumeSpecName: "kube-api-access-kgllq") pod "a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a" (UID: "a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a"). InnerVolumeSpecName "kube-api-access-kgllq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.748805 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a" (UID: "a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.750392 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-config" (OuterVolumeSpecName: "config") pod "a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a" (UID: "a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.751386 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a" (UID: "a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.762780 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a" (UID: "a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.804389 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgllq\" (UniqueName: \"kubernetes.io/projected/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-kube-api-access-kgllq\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.804432 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.804481 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.804516 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:55 crc kubenswrapper[5049]: I0127 18:28:55.804629 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:28:56 crc kubenswrapper[5049]: I0127 18:28:56.314389 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" event={"ID":"a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a","Type":"ContainerDied","Data":"4a5e4073459e33ba765dc06da6b9c62c30470cc03d3a9b314c8b1de89bca81d0"} Jan 27 18:28:56 crc kubenswrapper[5049]: I0127 18:28:56.314470 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9db86f55-6jp76" Jan 27 18:28:56 crc kubenswrapper[5049]: I0127 18:28:56.314739 5049 scope.go:117] "RemoveContainer" containerID="a720b51cb41199d70274684da5e050f320a48ffe3ab4716df1a4c4cc98087d64" Jan 27 18:28:56 crc kubenswrapper[5049]: I0127 18:28:56.336316 5049 scope.go:117] "RemoveContainer" containerID="a676ff77ef2f6cf4af6ec85076d652b0ae08215c6769deedaffefd1cd057d538" Jan 27 18:28:56 crc kubenswrapper[5049]: I0127 18:28:56.363160 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9db86f55-6jp76"] Jan 27 18:28:56 crc kubenswrapper[5049]: I0127 18:28:56.373367 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b9db86f55-6jp76"] Jan 27 18:28:56 crc kubenswrapper[5049]: I0127 18:28:56.636453 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:56 crc kubenswrapper[5049]: I0127 18:28:56.733506 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59c589b678-6wqpn" Jan 27 18:28:57 crc kubenswrapper[5049]: I0127 18:28:57.657596 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a" path="/var/lib/kubelet/pods/a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a/volumes" Jan 27 18:29:04 crc kubenswrapper[5049]: I0127 18:29:04.646086 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:29:04 crc kubenswrapper[5049]: E0127 18:29:04.646840 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.551303 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rs229"] Jan 27 18:29:07 crc kubenswrapper[5049]: E0127 18:29:07.551930 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a" containerName="init" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.551942 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a" containerName="init" Jan 27 18:29:07 crc kubenswrapper[5049]: E0127 18:29:07.551953 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a" containerName="dnsmasq-dns" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.551965 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a" containerName="dnsmasq-dns" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.552131 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7fd3783-f1ee-44c0-b1ab-4023bf0c7d5a" containerName="dnsmasq-dns" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.552805 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rs229" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.564847 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rs229"] Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.636095 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c06805d-2324-478a-97a3-c8b6bbaf12f5-operator-scripts\") pod \"neutron-db-create-rs229\" (UID: \"4c06805d-2324-478a-97a3-c8b6bbaf12f5\") " pod="openstack/neutron-db-create-rs229" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.636161 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jct8q\" (UniqueName: \"kubernetes.io/projected/4c06805d-2324-478a-97a3-c8b6bbaf12f5-kube-api-access-jct8q\") pod \"neutron-db-create-rs229\" (UID: \"4c06805d-2324-478a-97a3-c8b6bbaf12f5\") " pod="openstack/neutron-db-create-rs229" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.660017 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9a3e-account-create-update-zdnpx"] Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.661535 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9a3e-account-create-update-zdnpx" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.663947 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.697291 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9a3e-account-create-update-zdnpx"] Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.737974 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jct8q\" (UniqueName: \"kubernetes.io/projected/4c06805d-2324-478a-97a3-c8b6bbaf12f5-kube-api-access-jct8q\") pod \"neutron-db-create-rs229\" (UID: \"4c06805d-2324-478a-97a3-c8b6bbaf12f5\") " pod="openstack/neutron-db-create-rs229" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.738205 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkbjn\" (UniqueName: \"kubernetes.io/projected/bcea57ac-4299-451d-b94c-e0fe7457439b-kube-api-access-zkbjn\") pod \"neutron-9a3e-account-create-update-zdnpx\" (UID: \"bcea57ac-4299-451d-b94c-e0fe7457439b\") " pod="openstack/neutron-9a3e-account-create-update-zdnpx" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.738304 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcea57ac-4299-451d-b94c-e0fe7457439b-operator-scripts\") pod \"neutron-9a3e-account-create-update-zdnpx\" (UID: \"bcea57ac-4299-451d-b94c-e0fe7457439b\") " pod="openstack/neutron-9a3e-account-create-update-zdnpx" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.738334 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c06805d-2324-478a-97a3-c8b6bbaf12f5-operator-scripts\") pod \"neutron-db-create-rs229\" (UID: \"4c06805d-2324-478a-97a3-c8b6bbaf12f5\") " pod="openstack/neutron-db-create-rs229" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.739524 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c06805d-2324-478a-97a3-c8b6bbaf12f5-operator-scripts\") pod \"neutron-db-create-rs229\" (UID: \"4c06805d-2324-478a-97a3-c8b6bbaf12f5\") " pod="openstack/neutron-db-create-rs229" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.761505 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jct8q\" (UniqueName: \"kubernetes.io/projected/4c06805d-2324-478a-97a3-c8b6bbaf12f5-kube-api-access-jct8q\") pod \"neutron-db-create-rs229\" (UID: \"4c06805d-2324-478a-97a3-c8b6bbaf12f5\") " pod="openstack/neutron-db-create-rs229" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.840048 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcea57ac-4299-451d-b94c-e0fe7457439b-operator-scripts\") pod \"neutron-9a3e-account-create-update-zdnpx\" (UID: \"bcea57ac-4299-451d-b94c-e0fe7457439b\") " pod="openstack/neutron-9a3e-account-create-update-zdnpx" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.840465 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkbjn\" (UniqueName: \"kubernetes.io/projected/bcea57ac-4299-451d-b94c-e0fe7457439b-kube-api-access-zkbjn\") pod \"neutron-9a3e-account-create-update-zdnpx\" (UID: \"bcea57ac-4299-451d-b94c-e0fe7457439b\") " pod="openstack/neutron-9a3e-account-create-update-zdnpx" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.841204 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcea57ac-4299-451d-b94c-e0fe7457439b-operator-scripts\") pod \"neutron-9a3e-account-create-update-zdnpx\" (UID: \"bcea57ac-4299-451d-b94c-e0fe7457439b\") " pod="openstack/neutron-9a3e-account-create-update-zdnpx" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.858466 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkbjn\" (UniqueName: \"kubernetes.io/projected/bcea57ac-4299-451d-b94c-e0fe7457439b-kube-api-access-zkbjn\") pod \"neutron-9a3e-account-create-update-zdnpx\" (UID: \"bcea57ac-4299-451d-b94c-e0fe7457439b\") " pod="openstack/neutron-9a3e-account-create-update-zdnpx" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.872771 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rs229" Jan 27 18:29:07 crc kubenswrapper[5049]: I0127 18:29:07.984155 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9a3e-account-create-update-zdnpx" Jan 27 18:29:08 crc kubenswrapper[5049]: I0127 18:29:08.332730 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rs229"] Jan 27 18:29:08 crc kubenswrapper[5049]: I0127 18:29:08.475556 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rs229" event={"ID":"4c06805d-2324-478a-97a3-c8b6bbaf12f5","Type":"ContainerStarted","Data":"c5d3a8ea11057fe6dce0c0ab754bcbb2705b0ed455a5852cc670028ef60bb7c8"} Jan 27 18:29:08 crc kubenswrapper[5049]: I0127 18:29:08.484730 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9a3e-account-create-update-zdnpx"] Jan 27 18:29:08 crc kubenswrapper[5049]: W0127 18:29:08.494084 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcea57ac_4299_451d_b94c_e0fe7457439b.slice/crio-38540411c5da611d23851b035e8f129bcd9d9ba7243e9f65fa15d8fc628772a6 WatchSource:0}: Error finding container 38540411c5da611d23851b035e8f129bcd9d9ba7243e9f65fa15d8fc628772a6: Status 404 returned error can't find the container with id 38540411c5da611d23851b035e8f129bcd9d9ba7243e9f65fa15d8fc628772a6 Jan 27 18:29:09 crc kubenswrapper[5049]: I0127 18:29:09.489478 5049 generic.go:334] "Generic (PLEG): container finished" podID="bcea57ac-4299-451d-b94c-e0fe7457439b" containerID="4798ec7b8ecc5358270429c899f0c1d5095ab9670af65023065e093a09345f29" exitCode=0 Jan 27 18:29:09 crc kubenswrapper[5049]: I0127 18:29:09.489566 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9a3e-account-create-update-zdnpx" event={"ID":"bcea57ac-4299-451d-b94c-e0fe7457439b","Type":"ContainerDied","Data":"4798ec7b8ecc5358270429c899f0c1d5095ab9670af65023065e093a09345f29"} Jan 27 18:29:09 crc kubenswrapper[5049]: I0127 18:29:09.489837 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9a3e-account-create-update-zdnpx" event={"ID":"bcea57ac-4299-451d-b94c-e0fe7457439b","Type":"ContainerStarted","Data":"38540411c5da611d23851b035e8f129bcd9d9ba7243e9f65fa15d8fc628772a6"} Jan 27 18:29:09 crc kubenswrapper[5049]: I0127 18:29:09.494272 5049 generic.go:334] "Generic (PLEG): container finished" podID="4c06805d-2324-478a-97a3-c8b6bbaf12f5" containerID="ab3a9aadfb96094dc76c2e79d46bbb51f0a3c40eacbdc5eb55d13623977c7b46" exitCode=0 Jan 27 18:29:09 crc kubenswrapper[5049]: I0127 18:29:09.494317 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rs229" event={"ID":"4c06805d-2324-478a-97a3-c8b6bbaf12f5","Type":"ContainerDied","Data":"ab3a9aadfb96094dc76c2e79d46bbb51f0a3c40eacbdc5eb55d13623977c7b46"} Jan 27 18:29:10 crc kubenswrapper[5049]: I0127 18:29:10.877805 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9a3e-account-create-update-zdnpx" Jan 27 18:29:10 crc kubenswrapper[5049]: I0127 18:29:10.897792 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcea57ac-4299-451d-b94c-e0fe7457439b-operator-scripts\") pod \"bcea57ac-4299-451d-b94c-e0fe7457439b\" (UID: \"bcea57ac-4299-451d-b94c-e0fe7457439b\") " Jan 27 18:29:10 crc kubenswrapper[5049]: I0127 18:29:10.897960 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkbjn\" (UniqueName: \"kubernetes.io/projected/bcea57ac-4299-451d-b94c-e0fe7457439b-kube-api-access-zkbjn\") pod \"bcea57ac-4299-451d-b94c-e0fe7457439b\" (UID: \"bcea57ac-4299-451d-b94c-e0fe7457439b\") " Jan 27 18:29:10 crc kubenswrapper[5049]: I0127 18:29:10.898331 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcea57ac-4299-451d-b94c-e0fe7457439b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcea57ac-4299-451d-b94c-e0fe7457439b" (UID: "bcea57ac-4299-451d-b94c-e0fe7457439b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:10 crc kubenswrapper[5049]: I0127 18:29:10.898471 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcea57ac-4299-451d-b94c-e0fe7457439b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:10 crc kubenswrapper[5049]: I0127 18:29:10.903321 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcea57ac-4299-451d-b94c-e0fe7457439b-kube-api-access-zkbjn" (OuterVolumeSpecName: "kube-api-access-zkbjn") pod "bcea57ac-4299-451d-b94c-e0fe7457439b" (UID: "bcea57ac-4299-451d-b94c-e0fe7457439b"). InnerVolumeSpecName "kube-api-access-zkbjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:10 crc kubenswrapper[5049]: I0127 18:29:10.978190 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rs229" Jan 27 18:29:10 crc kubenswrapper[5049]: I0127 18:29:10.999621 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jct8q\" (UniqueName: \"kubernetes.io/projected/4c06805d-2324-478a-97a3-c8b6bbaf12f5-kube-api-access-jct8q\") pod \"4c06805d-2324-478a-97a3-c8b6bbaf12f5\" (UID: \"4c06805d-2324-478a-97a3-c8b6bbaf12f5\") " Jan 27 18:29:10 crc kubenswrapper[5049]: I0127 18:29:10.999867 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c06805d-2324-478a-97a3-c8b6bbaf12f5-operator-scripts\") pod \"4c06805d-2324-478a-97a3-c8b6bbaf12f5\" (UID: \"4c06805d-2324-478a-97a3-c8b6bbaf12f5\") " Jan 27 18:29:11 crc kubenswrapper[5049]: I0127 18:29:11.000278 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkbjn\" (UniqueName: \"kubernetes.io/projected/bcea57ac-4299-451d-b94c-e0fe7457439b-kube-api-access-zkbjn\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:11 crc kubenswrapper[5049]: I0127 18:29:11.000410 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c06805d-2324-478a-97a3-c8b6bbaf12f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c06805d-2324-478a-97a3-c8b6bbaf12f5" (UID: "4c06805d-2324-478a-97a3-c8b6bbaf12f5"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:11 crc kubenswrapper[5049]: I0127 18:29:11.005930 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c06805d-2324-478a-97a3-c8b6bbaf12f5-kube-api-access-jct8q" (OuterVolumeSpecName: "kube-api-access-jct8q") pod "4c06805d-2324-478a-97a3-c8b6bbaf12f5" (UID: "4c06805d-2324-478a-97a3-c8b6bbaf12f5"). InnerVolumeSpecName "kube-api-access-jct8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:11 crc kubenswrapper[5049]: I0127 18:29:11.101338 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c06805d-2324-478a-97a3-c8b6bbaf12f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:11 crc kubenswrapper[5049]: I0127 18:29:11.101365 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jct8q\" (UniqueName: \"kubernetes.io/projected/4c06805d-2324-478a-97a3-c8b6bbaf12f5-kube-api-access-jct8q\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:11 crc kubenswrapper[5049]: I0127 18:29:11.513204 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9a3e-account-create-update-zdnpx" Jan 27 18:29:11 crc kubenswrapper[5049]: I0127 18:29:11.513223 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9a3e-account-create-update-zdnpx" event={"ID":"bcea57ac-4299-451d-b94c-e0fe7457439b","Type":"ContainerDied","Data":"38540411c5da611d23851b035e8f129bcd9d9ba7243e9f65fa15d8fc628772a6"} Jan 27 18:29:11 crc kubenswrapper[5049]: I0127 18:29:11.513623 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38540411c5da611d23851b035e8f129bcd9d9ba7243e9f65fa15d8fc628772a6" Jan 27 18:29:11 crc kubenswrapper[5049]: I0127 18:29:11.515005 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rs229" event={"ID":"4c06805d-2324-478a-97a3-c8b6bbaf12f5","Type":"ContainerDied","Data":"c5d3a8ea11057fe6dce0c0ab754bcbb2705b0ed455a5852cc670028ef60bb7c8"} Jan 27 18:29:11 crc kubenswrapper[5049]: I0127 18:29:11.515047 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5d3a8ea11057fe6dce0c0ab754bcbb2705b0ed455a5852cc670028ef60bb7c8" Jan 27 18:29:11 crc kubenswrapper[5049]: I0127 18:29:11.515067 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rs229" Jan 27 18:29:12 crc kubenswrapper[5049]: I0127 18:29:12.881776 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-hjzb7"] Jan 27 18:29:12 crc kubenswrapper[5049]: E0127 18:29:12.882235 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c06805d-2324-478a-97a3-c8b6bbaf12f5" containerName="mariadb-database-create" Jan 27 18:29:12 crc kubenswrapper[5049]: I0127 18:29:12.882250 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c06805d-2324-478a-97a3-c8b6bbaf12f5" containerName="mariadb-database-create" Jan 27 18:29:12 crc kubenswrapper[5049]: E0127 18:29:12.882279 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcea57ac-4299-451d-b94c-e0fe7457439b" containerName="mariadb-account-create-update" Jan 27 18:29:12 crc kubenswrapper[5049]: I0127 18:29:12.882287 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcea57ac-4299-451d-b94c-e0fe7457439b" containerName="mariadb-account-create-update" Jan 27 18:29:12 crc kubenswrapper[5049]: I0127 18:29:12.882503 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcea57ac-4299-451d-b94c-e0fe7457439b" containerName="mariadb-account-create-update" Jan 27 18:29:12 crc kubenswrapper[5049]: I0127 18:29:12.882517 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c06805d-2324-478a-97a3-c8b6bbaf12f5" containerName="mariadb-database-create" Jan 27 18:29:12 crc kubenswrapper[5049]: I0127 18:29:12.883258 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hjzb7" Jan 27 18:29:12 crc kubenswrapper[5049]: I0127 18:29:12.887488 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xrjg2" Jan 27 18:29:12 crc kubenswrapper[5049]: I0127 18:29:12.889428 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 18:29:12 crc kubenswrapper[5049]: I0127 18:29:12.889729 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 18:29:12 crc kubenswrapper[5049]: I0127 18:29:12.896004 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hjzb7"] Jan 27 18:29:13 crc kubenswrapper[5049]: I0127 18:29:13.033247 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g78k\" (UniqueName: \"kubernetes.io/projected/7085d906-0bf7-4167-9c11-7d2468761de9-kube-api-access-4g78k\") pod \"neutron-db-sync-hjzb7\" (UID: \"7085d906-0bf7-4167-9c11-7d2468761de9\") " pod="openstack/neutron-db-sync-hjzb7" Jan 27 18:29:13 crc kubenswrapper[5049]: I0127 18:29:13.033456 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7085d906-0bf7-4167-9c11-7d2468761de9-config\") pod \"neutron-db-sync-hjzb7\" (UID: \"7085d906-0bf7-4167-9c11-7d2468761de9\") " pod="openstack/neutron-db-sync-hjzb7" Jan 27 18:29:13 crc kubenswrapper[5049]: I0127 18:29:13.033967 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7085d906-0bf7-4167-9c11-7d2468761de9-combined-ca-bundle\") pod \"neutron-db-sync-hjzb7\" (UID: \"7085d906-0bf7-4167-9c11-7d2468761de9\") " pod="openstack/neutron-db-sync-hjzb7" Jan 27 18:29:13 crc kubenswrapper[5049]: I0127 
18:29:13.135535 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7085d906-0bf7-4167-9c11-7d2468761de9-combined-ca-bundle\") pod \"neutron-db-sync-hjzb7\" (UID: \"7085d906-0bf7-4167-9c11-7d2468761de9\") " pod="openstack/neutron-db-sync-hjzb7" Jan 27 18:29:13 crc kubenswrapper[5049]: I0127 18:29:13.135631 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g78k\" (UniqueName: \"kubernetes.io/projected/7085d906-0bf7-4167-9c11-7d2468761de9-kube-api-access-4g78k\") pod \"neutron-db-sync-hjzb7\" (UID: \"7085d906-0bf7-4167-9c11-7d2468761de9\") " pod="openstack/neutron-db-sync-hjzb7" Jan 27 18:29:13 crc kubenswrapper[5049]: I0127 18:29:13.135717 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7085d906-0bf7-4167-9c11-7d2468761de9-config\") pod \"neutron-db-sync-hjzb7\" (UID: \"7085d906-0bf7-4167-9c11-7d2468761de9\") " pod="openstack/neutron-db-sync-hjzb7" Jan 27 18:29:13 crc kubenswrapper[5049]: I0127 18:29:13.148626 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7085d906-0bf7-4167-9c11-7d2468761de9-combined-ca-bundle\") pod \"neutron-db-sync-hjzb7\" (UID: \"7085d906-0bf7-4167-9c11-7d2468761de9\") " pod="openstack/neutron-db-sync-hjzb7" Jan 27 18:29:13 crc kubenswrapper[5049]: I0127 18:29:13.150821 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7085d906-0bf7-4167-9c11-7d2468761de9-config\") pod \"neutron-db-sync-hjzb7\" (UID: \"7085d906-0bf7-4167-9c11-7d2468761de9\") " pod="openstack/neutron-db-sync-hjzb7" Jan 27 18:29:13 crc kubenswrapper[5049]: I0127 18:29:13.155096 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g78k\" (UniqueName: \"kubernetes.io/projected/7085d906-0bf7-4167-9c11-7d2468761de9-kube-api-access-4g78k\") pod \"neutron-db-sync-hjzb7\" (UID: \"7085d906-0bf7-4167-9c11-7d2468761de9\") " pod="openstack/neutron-db-sync-hjzb7" Jan 27 18:29:13 crc kubenswrapper[5049]: I0127 18:29:13.215048 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hjzb7" Jan 27 18:29:13 crc kubenswrapper[5049]: I0127 18:29:13.686494 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hjzb7"] Jan 27 18:29:14 crc kubenswrapper[5049]: I0127 18:29:14.542468 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hjzb7" event={"ID":"7085d906-0bf7-4167-9c11-7d2468761de9","Type":"ContainerStarted","Data":"5d70e3147c9ccd684ef9de160dae4bfc7946624825106837e68894a2ba3f6d5a"} Jan 27 18:29:14 crc kubenswrapper[5049]: I0127 18:29:14.542775 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hjzb7" event={"ID":"7085d906-0bf7-4167-9c11-7d2468761de9","Type":"ContainerStarted","Data":"348c7fbed60c0e5635d663c92c362042c40bc0681bd8d58d5ca9256809ae7bbc"} Jan 27 18:29:14 crc kubenswrapper[5049]: I0127 18:29:14.569251 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-hjzb7" podStartSLOduration=2.5692262809999997 podStartE2EDuration="2.569226281s" podCreationTimestamp="2026-01-27 18:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:14.564107516 +0000 UTC m=+5529.663081075" watchObservedRunningTime="2026-01-27 18:29:14.569226281 +0000 UTC m=+5529.668199830" Jan 27 18:29:16 crc kubenswrapper[5049]: I0127 18:29:16.585157 5049 scope.go:117] "RemoveContainer" containerID="6e4fbd4f0b8940ca283748461c91f4da4b9b6a3512080e4a5f9714d7eac7f27d" Jan 27 18:29:18 crc kubenswrapper[5049]: I0127 18:29:18.579886 5049 generic.go:334] "Generic (PLEG): container finished" podID="7085d906-0bf7-4167-9c11-7d2468761de9" containerID="5d70e3147c9ccd684ef9de160dae4bfc7946624825106837e68894a2ba3f6d5a" exitCode=0 Jan 27 18:29:18 crc kubenswrapper[5049]: I0127 18:29:18.579975 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hjzb7" event={"ID":"7085d906-0bf7-4167-9c11-7d2468761de9","Type":"ContainerDied","Data":"5d70e3147c9ccd684ef9de160dae4bfc7946624825106837e68894a2ba3f6d5a"} Jan 27 18:29:19 crc kubenswrapper[5049]: I0127 18:29:19.649573 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:29:19 crc kubenswrapper[5049]: E0127 18:29:19.650309 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:29:19 crc kubenswrapper[5049]: I0127 18:29:19.983430 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hjzb7" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.077712 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g78k\" (UniqueName: \"kubernetes.io/projected/7085d906-0bf7-4167-9c11-7d2468761de9-kube-api-access-4g78k\") pod \"7085d906-0bf7-4167-9c11-7d2468761de9\" (UID: \"7085d906-0bf7-4167-9c11-7d2468761de9\") " Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.077825 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7085d906-0bf7-4167-9c11-7d2468761de9-combined-ca-bundle\") pod \"7085d906-0bf7-4167-9c11-7d2468761de9\" (UID: \"7085d906-0bf7-4167-9c11-7d2468761de9\") " Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.077901 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7085d906-0bf7-4167-9c11-7d2468761de9-config\") pod \"7085d906-0bf7-4167-9c11-7d2468761de9\" (UID: \"7085d906-0bf7-4167-9c11-7d2468761de9\") " Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.084865 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7085d906-0bf7-4167-9c11-7d2468761de9-kube-api-access-4g78k" (OuterVolumeSpecName: "kube-api-access-4g78k") pod "7085d906-0bf7-4167-9c11-7d2468761de9" (UID: "7085d906-0bf7-4167-9c11-7d2468761de9"). InnerVolumeSpecName "kube-api-access-4g78k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.101827 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7085d906-0bf7-4167-9c11-7d2468761de9-config" (OuterVolumeSpecName: "config") pod "7085d906-0bf7-4167-9c11-7d2468761de9" (UID: "7085d906-0bf7-4167-9c11-7d2468761de9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.103387 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7085d906-0bf7-4167-9c11-7d2468761de9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7085d906-0bf7-4167-9c11-7d2468761de9" (UID: "7085d906-0bf7-4167-9c11-7d2468761de9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.180002 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g78k\" (UniqueName: \"kubernetes.io/projected/7085d906-0bf7-4167-9c11-7d2468761de9-kube-api-access-4g78k\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.180043 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7085d906-0bf7-4167-9c11-7d2468761de9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.180064 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7085d906-0bf7-4167-9c11-7d2468761de9-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.598467 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hjzb7" event={"ID":"7085d906-0bf7-4167-9c11-7d2468761de9","Type":"ContainerDied","Data":"348c7fbed60c0e5635d663c92c362042c40bc0681bd8d58d5ca9256809ae7bbc"} Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.598510 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="348c7fbed60c0e5635d663c92c362042c40bc0681bd8d58d5ca9256809ae7bbc" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.598798 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hjzb7" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.744873 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb55df6c7-smd5h"] Jan 27 18:29:20 crc kubenswrapper[5049]: E0127 18:29:20.746405 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7085d906-0bf7-4167-9c11-7d2468761de9" containerName="neutron-db-sync" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.746487 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7085d906-0bf7-4167-9c11-7d2468761de9" containerName="neutron-db-sync" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.746786 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7085d906-0bf7-4167-9c11-7d2468761de9" containerName="neutron-db-sync" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.747829 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.761275 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb55df6c7-smd5h"] Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.839124 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7d4c98d6f7-xq4dq"] Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.842660 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.845204 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xrjg2" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.845685 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.845934 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.859875 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d4c98d6f7-xq4dq"] Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.897545 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-config\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.897602 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-dns-svc\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.897939 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.898064 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:20 crc kubenswrapper[5049]: I0127 18:29:20.898108 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm7g5\" (UniqueName: \"kubernetes.io/projected/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-kube-api-access-sm7g5\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.002604 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.000647 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:21 crc 
kubenswrapper[5049]: I0127 18:29:21.003173 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm7g5\" (UniqueName: \"kubernetes.io/projected/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-kube-api-access-sm7g5\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.003694 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-config\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.003852 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-dns-svc\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.004044 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78274e9b-3cec-41ef-aed1-92296bc999a6-config\") pod \"neutron-7d4c98d6f7-xq4dq\" (UID: \"78274e9b-3cec-41ef-aed1-92296bc999a6\") " pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.004258 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78274e9b-3cec-41ef-aed1-92296bc999a6-combined-ca-bundle\") pod \"neutron-7d4c98d6f7-xq4dq\" (UID: \"78274e9b-3cec-41ef-aed1-92296bc999a6\") " pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.004374 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/78274e9b-3cec-41ef-aed1-92296bc999a6-httpd-config\") pod \"neutron-7d4c98d6f7-xq4dq\" (UID: \"78274e9b-3cec-41ef-aed1-92296bc999a6\") " pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.004599 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.004742 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvvxt\" (UniqueName: \"kubernetes.io/projected/78274e9b-3cec-41ef-aed1-92296bc999a6-kube-api-access-jvvxt\") pod \"neutron-7d4c98d6f7-xq4dq\" (UID: \"78274e9b-3cec-41ef-aed1-92296bc999a6\") " pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.004598 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-config\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.004639 5049 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-dns-svc\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.005259 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.034344 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm7g5\" (UniqueName: \"kubernetes.io/projected/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-kube-api-access-sm7g5\") pod \"dnsmasq-dns-6cb55df6c7-smd5h\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.075817 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.106541 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78274e9b-3cec-41ef-aed1-92296bc999a6-config\") pod \"neutron-7d4c98d6f7-xq4dq\" (UID: \"78274e9b-3cec-41ef-aed1-92296bc999a6\") " pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.107324 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78274e9b-3cec-41ef-aed1-92296bc999a6-combined-ca-bundle\") pod \"neutron-7d4c98d6f7-xq4dq\" (UID: \"78274e9b-3cec-41ef-aed1-92296bc999a6\") " pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.107427 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/78274e9b-3cec-41ef-aed1-92296bc999a6-httpd-config\") pod \"neutron-7d4c98d6f7-xq4dq\" (UID: \"78274e9b-3cec-41ef-aed1-92296bc999a6\") " pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.107553 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvvxt\" (UniqueName: \"kubernetes.io/projected/78274e9b-3cec-41ef-aed1-92296bc999a6-kube-api-access-jvvxt\") pod \"neutron-7d4c98d6f7-xq4dq\" (UID: \"78274e9b-3cec-41ef-aed1-92296bc999a6\") " pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.113610 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78274e9b-3cec-41ef-aed1-92296bc999a6-combined-ca-bundle\") pod \"neutron-7d4c98d6f7-xq4dq\" (UID: \"78274e9b-3cec-41ef-aed1-92296bc999a6\") " pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.114929 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/78274e9b-3cec-41ef-aed1-92296bc999a6-config\") pod \"neutron-7d4c98d6f7-xq4dq\" (UID: \"78274e9b-3cec-41ef-aed1-92296bc999a6\") " pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 
18:29:21.114944 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/78274e9b-3cec-41ef-aed1-92296bc999a6-httpd-config\") pod \"neutron-7d4c98d6f7-xq4dq\" (UID: \"78274e9b-3cec-41ef-aed1-92296bc999a6\") " pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.126186 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvvxt\" (UniqueName: \"kubernetes.io/projected/78274e9b-3cec-41ef-aed1-92296bc999a6-kube-api-access-jvvxt\") pod \"neutron-7d4c98d6f7-xq4dq\" (UID: \"78274e9b-3cec-41ef-aed1-92296bc999a6\") " pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.201722 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.554787 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb55df6c7-smd5h"] Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.608235 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" event={"ID":"62db2bb1-44d9-4bf1-824b-44aac8bde3f3","Type":"ContainerStarted","Data":"cc5d600e851de4cd1342ad56d929b2f501de171c0e77950c2e3e22f728bddf9f"} Jan 27 18:29:21 crc kubenswrapper[5049]: I0127 18:29:21.806943 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d4c98d6f7-xq4dq"] Jan 27 18:29:21 crc kubenswrapper[5049]: W0127 18:29:21.818084 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78274e9b_3cec_41ef_aed1_92296bc999a6.slice/crio-468ee22c62f6576e938c3d7f2dcbb915db4f37b0f1ab6bb050fbdcdc07ccc89e WatchSource:0}: Error finding container 468ee22c62f6576e938c3d7f2dcbb915db4f37b0f1ab6bb050fbdcdc07ccc89e: Status 404 returned error can't find the container with id 468ee22c62f6576e938c3d7f2dcbb915db4f37b0f1ab6bb050fbdcdc07ccc89e Jan 27 18:29:22 crc kubenswrapper[5049]: I0127 18:29:22.624317 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d4c98d6f7-xq4dq" event={"ID":"78274e9b-3cec-41ef-aed1-92296bc999a6","Type":"ContainerStarted","Data":"7b06aec91be8778b717ac496df3791d5c039c5bb34579e08336483ee32b4b630"} Jan 27 18:29:22 crc kubenswrapper[5049]: I0127 18:29:22.624370 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d4c98d6f7-xq4dq" event={"ID":"78274e9b-3cec-41ef-aed1-92296bc999a6","Type":"ContainerStarted","Data":"6593f0cb3dce50556f42cc7fb47e4d2dee938b9e7fac411721a9ee02ca1212b3"} Jan 27 18:29:22 crc kubenswrapper[5049]: I0127 18:29:22.624384 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d4c98d6f7-xq4dq" event={"ID":"78274e9b-3cec-41ef-aed1-92296bc999a6","Type":"ContainerStarted","Data":"468ee22c62f6576e938c3d7f2dcbb915db4f37b0f1ab6bb050fbdcdc07ccc89e"} Jan 27 18:29:22 crc kubenswrapper[5049]: I0127 18:29:22.624427 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:22 crc kubenswrapper[5049]: I0127 18:29:22.631835 5049 generic.go:334] "Generic (PLEG): container finished" podID="62db2bb1-44d9-4bf1-824b-44aac8bde3f3" containerID="7c305bc81b71d34c7eb355bec05b8374d030d73e43d8cd93b302adb2696c05cc" exitCode=0 Jan 27 18:29:22 crc kubenswrapper[5049]: I0127 18:29:22.632248 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" event={"ID":"62db2bb1-44d9-4bf1-824b-44aac8bde3f3","Type":"ContainerDied","Data":"7c305bc81b71d34c7eb355bec05b8374d030d73e43d8cd93b302adb2696c05cc"} Jan 27 18:29:22 crc kubenswrapper[5049]: I0127 18:29:22.702343 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7d4c98d6f7-xq4dq" podStartSLOduration=2.702324216 podStartE2EDuration="2.702324216s" podCreationTimestamp="2026-01-27 18:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:22.655208595 +0000 UTC m=+5537.754182144" watchObservedRunningTime="2026-01-27 18:29:22.702324216 +0000 UTC m=+5537.801297765" Jan 27 18:29:23 crc kubenswrapper[5049]: I0127 18:29:23.673289 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" event={"ID":"62db2bb1-44d9-4bf1-824b-44aac8bde3f3","Type":"ContainerStarted","Data":"7244a6924043f791bcc2e3da6afdb23be31a5a4f462f8559adc4934ef924c40d"} Jan 27 18:29:23 crc kubenswrapper[5049]: I0127 18:29:23.681380 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" podStartSLOduration=3.681364527 podStartE2EDuration="3.681364527s" podCreationTimestamp="2026-01-27 18:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:29:23.672783005 +0000 UTC m=+5538.771756554" watchObservedRunningTime="2026-01-27 18:29:23.681364527 +0000 UTC m=+5538.780338076" Jan 27 18:29:24 crc kubenswrapper[5049]: I0127 18:29:24.662975 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:30 crc kubenswrapper[5049]: I0127 18:29:30.646236 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:29:30 crc kubenswrapper[5049]: E0127 18:29:30.647444 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.080885 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.157683 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67844db599-2245g"] Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.157928 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67844db599-2245g" podUID="39e3eb69-6b02-4fc4-9015-012bca2924b8" containerName="dnsmasq-dns" containerID="cri-o://5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc" gracePeriod=10 Jan 27 18:29:31 crc kubenswrapper[5049]: E0127 18:29:31.326150 5049 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39e3eb69_6b02_4fc4_9015_012bca2924b8.slice/crio-conmon-5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39e3eb69_6b02_4fc4_9015_012bca2924b8.slice/crio-5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc.scope\": RecentStats: unable to find data in memory cache]" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.654530 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.725225 5049 generic.go:334] "Generic (PLEG): container finished" podID="39e3eb69-6b02-4fc4-9015-012bca2924b8" containerID="5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc" exitCode=0 Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.725279 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67844db599-2245g" event={"ID":"39e3eb69-6b02-4fc4-9015-012bca2924b8","Type":"ContainerDied","Data":"5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc"} Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.725308 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67844db599-2245g" event={"ID":"39e3eb69-6b02-4fc4-9015-012bca2924b8","Type":"ContainerDied","Data":"232c3d1075a14783b00819128e588601641fcfa8c9a26c5cccb3673964b75865"} Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.725326 5049 scope.go:117] "RemoveContainer" containerID="5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.725469 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67844db599-2245g" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.757328 5049 scope.go:117] "RemoveContainer" containerID="d654e17cbda8f8fad212636913d3c4b6d3ec08816d49fc1c049b690e9e933383" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.783787 5049 scope.go:117] "RemoveContainer" containerID="5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc" Jan 27 18:29:31 crc kubenswrapper[5049]: E0127 18:29:31.784317 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc\": container with ID starting with 5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc not found: ID does not exist" containerID="5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.784344 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc"} err="failed to get container status \"5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc\": rpc error: code = NotFound desc = could not find container \"5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc\": container with ID starting with 5cfb734c741c41275ad8cb4d3f3ca112f9bd76da89c0d324a4f33247331f71dc not found: ID does not exist" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.784362 5049 scope.go:117] "RemoveContainer" containerID="d654e17cbda8f8fad212636913d3c4b6d3ec08816d49fc1c049b690e9e933383" Jan 27 18:29:31 crc kubenswrapper[5049]: E0127 18:29:31.784639 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d654e17cbda8f8fad212636913d3c4b6d3ec08816d49fc1c049b690e9e933383\": container with ID starting with d654e17cbda8f8fad212636913d3c4b6d3ec08816d49fc1c049b690e9e933383 not found: ID does not exist" containerID="d654e17cbda8f8fad212636913d3c4b6d3ec08816d49fc1c049b690e9e933383" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.784661 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d654e17cbda8f8fad212636913d3c4b6d3ec08816d49fc1c049b690e9e933383"} err="failed to get container status \"d654e17cbda8f8fad212636913d3c4b6d3ec08816d49fc1c049b690e9e933383\": rpc error: code = NotFound desc = could not find container \"d654e17cbda8f8fad212636913d3c4b6d3ec08816d49fc1c049b690e9e933383\": container with ID starting with d654e17cbda8f8fad212636913d3c4b6d3ec08816d49fc1c049b690e9e933383 not found: ID does not exist" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.816272 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-config\") pod \"39e3eb69-6b02-4fc4-9015-012bca2924b8\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.816737 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-ovsdbserver-sb\") pod \"39e3eb69-6b02-4fc4-9015-012bca2924b8\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.816789 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-m62hn\" (UniqueName: \"kubernetes.io/projected/39e3eb69-6b02-4fc4-9015-012bca2924b8-kube-api-access-m62hn\") pod \"39e3eb69-6b02-4fc4-9015-012bca2924b8\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.816832 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-ovsdbserver-nb\") pod \"39e3eb69-6b02-4fc4-9015-012bca2924b8\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.816884 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-dns-svc\") pod \"39e3eb69-6b02-4fc4-9015-012bca2924b8\" (UID: \"39e3eb69-6b02-4fc4-9015-012bca2924b8\") " Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.825249 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39e3eb69-6b02-4fc4-9015-012bca2924b8-kube-api-access-m62hn" (OuterVolumeSpecName: "kube-api-access-m62hn") pod "39e3eb69-6b02-4fc4-9015-012bca2924b8" (UID: "39e3eb69-6b02-4fc4-9015-012bca2924b8"). InnerVolumeSpecName "kube-api-access-m62hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.862760 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-config" (OuterVolumeSpecName: "config") pod "39e3eb69-6b02-4fc4-9015-012bca2924b8" (UID: "39e3eb69-6b02-4fc4-9015-012bca2924b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.864300 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39e3eb69-6b02-4fc4-9015-012bca2924b8" (UID: "39e3eb69-6b02-4fc4-9015-012bca2924b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.866067 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "39e3eb69-6b02-4fc4-9015-012bca2924b8" (UID: "39e3eb69-6b02-4fc4-9015-012bca2924b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.868319 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "39e3eb69-6b02-4fc4-9015-012bca2924b8" (UID: "39e3eb69-6b02-4fc4-9015-012bca2924b8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.918773 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m62hn\" (UniqueName: \"kubernetes.io/projected/39e3eb69-6b02-4fc4-9015-012bca2924b8-kube-api-access-m62hn\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.918807 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.918823 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.918834 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:31 crc kubenswrapper[5049]: I0127 18:29:31.918844 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39e3eb69-6b02-4fc4-9015-012bca2924b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:29:32 crc kubenswrapper[5049]: I0127 18:29:32.067351 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67844db599-2245g"] Jan 27 18:29:32 crc kubenswrapper[5049]: I0127 18:29:32.075184 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67844db599-2245g"] Jan 27 18:29:33 crc kubenswrapper[5049]: I0127 18:29:33.656865 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39e3eb69-6b02-4fc4-9015-012bca2924b8" path="/var/lib/kubelet/pods/39e3eb69-6b02-4fc4-9015-012bca2924b8/volumes" Jan 27 18:29:43 crc kubenswrapper[5049]: I0127 18:29:43.646949 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:29:43 crc kubenswrapper[5049]: E0127 18:29:43.648179 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:29:51 crc kubenswrapper[5049]: I0127 18:29:51.210071 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7d4c98d6f7-xq4dq" Jan 27 18:29:55 crc kubenswrapper[5049]: I0127 18:29:55.651126 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:29:55 crc kubenswrapper[5049]: E0127 18:29:55.651960 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.485791 5049 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-bsb4r"] Jan 27 18:29:58 crc kubenswrapper[5049]: E0127 18:29:58.486532 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e3eb69-6b02-4fc4-9015-012bca2924b8" containerName="dnsmasq-dns" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.486550 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e3eb69-6b02-4fc4-9015-012bca2924b8" containerName="dnsmasq-dns" Jan 27 18:29:58 crc kubenswrapper[5049]: E0127 18:29:58.486572 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e3eb69-6b02-4fc4-9015-012bca2924b8" containerName="init" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.486580 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e3eb69-6b02-4fc4-9015-012bca2924b8" containerName="init" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.486825 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e3eb69-6b02-4fc4-9015-012bca2924b8" containerName="dnsmasq-dns" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.487630 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bsb4r" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.515998 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bsb4r"] Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.582560 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b1f2-account-create-update-6nmwv"] Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.583778 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b1f2-account-create-update-6nmwv" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.590216 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.592379 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b1f2-account-create-update-6nmwv"] Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.595071 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2kph\" (UniqueName: \"kubernetes.io/projected/217ee884-d205-47b6-8b5a-38b054f72ca8-kube-api-access-l2kph\") pod \"glance-db-create-bsb4r\" (UID: \"217ee884-d205-47b6-8b5a-38b054f72ca8\") " pod="openstack/glance-db-create-bsb4r" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.595191 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217ee884-d205-47b6-8b5a-38b054f72ca8-operator-scripts\") pod \"glance-db-create-bsb4r\" (UID: \"217ee884-d205-47b6-8b5a-38b054f72ca8\") " pod="openstack/glance-db-create-bsb4r" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.698408 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217ee884-d205-47b6-8b5a-38b054f72ca8-operator-scripts\") pod \"glance-db-create-bsb4r\" (UID: \"217ee884-d205-47b6-8b5a-38b054f72ca8\") " pod="openstack/glance-db-create-bsb4r" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.698582 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d90a52d8-4aaf-476e-b92e-0b6b4894c134-operator-scripts\") pod \"glance-b1f2-account-create-update-6nmwv\" (UID: \"d90a52d8-4aaf-476e-b92e-0b6b4894c134\") " pod="openstack/glance-b1f2-account-create-update-6nmwv" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.698721 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vg2x\" (UniqueName: \"kubernetes.io/projected/d90a52d8-4aaf-476e-b92e-0b6b4894c134-kube-api-access-9vg2x\") pod \"glance-b1f2-account-create-update-6nmwv\" (UID: \"d90a52d8-4aaf-476e-b92e-0b6b4894c134\") " pod="openstack/glance-b1f2-account-create-update-6nmwv" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.698791 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2kph\" (UniqueName: \"kubernetes.io/projected/217ee884-d205-47b6-8b5a-38b054f72ca8-kube-api-access-l2kph\") pod \"glance-db-create-bsb4r\" (UID: \"217ee884-d205-47b6-8b5a-38b054f72ca8\") " pod="openstack/glance-db-create-bsb4r" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.700362 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217ee884-d205-47b6-8b5a-38b054f72ca8-operator-scripts\") pod \"glance-db-create-bsb4r\" (UID: \"217ee884-d205-47b6-8b5a-38b054f72ca8\") " pod="openstack/glance-db-create-bsb4r" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.718676 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2kph\" (UniqueName: \"kubernetes.io/projected/217ee884-d205-47b6-8b5a-38b054f72ca8-kube-api-access-l2kph\") pod \"glance-db-create-bsb4r\" (UID: \"217ee884-d205-47b6-8b5a-38b054f72ca8\") " pod="openstack/glance-db-create-bsb4r" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.799764 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vg2x\" (UniqueName: \"kubernetes.io/projected/d90a52d8-4aaf-476e-b92e-0b6b4894c134-kube-api-access-9vg2x\") pod \"glance-b1f2-account-create-update-6nmwv\" (UID: \"d90a52d8-4aaf-476e-b92e-0b6b4894c134\") " pod="openstack/glance-b1f2-account-create-update-6nmwv" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.800299 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90a52d8-4aaf-476e-b92e-0b6b4894c134-operator-scripts\") pod \"glance-b1f2-account-create-update-6nmwv\" (UID: \"d90a52d8-4aaf-476e-b92e-0b6b4894c134\") " pod="openstack/glance-b1f2-account-create-update-6nmwv" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.800976 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90a52d8-4aaf-476e-b92e-0b6b4894c134-operator-scripts\") pod \"glance-b1f2-account-create-update-6nmwv\" (UID: \"d90a52d8-4aaf-476e-b92e-0b6b4894c134\") " pod="openstack/glance-b1f2-account-create-update-6nmwv" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.810634 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-bsb4r" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.819113 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vg2x\" (UniqueName: \"kubernetes.io/projected/d90a52d8-4aaf-476e-b92e-0b6b4894c134-kube-api-access-9vg2x\") pod \"glance-b1f2-account-create-update-6nmwv\" (UID: \"d90a52d8-4aaf-476e-b92e-0b6b4894c134\") " pod="openstack/glance-b1f2-account-create-update-6nmwv" Jan 27 18:29:58 crc kubenswrapper[5049]: I0127 18:29:58.902862 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b1f2-account-create-update-6nmwv" Jan 27 18:29:59 crc kubenswrapper[5049]: I0127 18:29:59.265597 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-bsb4r"] Jan 27 18:29:59 crc kubenswrapper[5049]: I0127 18:29:59.410620 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b1f2-account-create-update-6nmwv"] Jan 27 18:29:59 crc kubenswrapper[5049]: W0127 18:29:59.416501 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd90a52d8_4aaf_476e_b92e_0b6b4894c134.slice/crio-294ff41a285c7f0881748c37ab2d635f2203bb7c4f12fb4708d509af0c3a312a WatchSource:0}: Error finding container 294ff41a285c7f0881748c37ab2d635f2203bb7c4f12fb4708d509af0c3a312a: Status 404 returned error can't find the container with id 294ff41a285c7f0881748c37ab2d635f2203bb7c4f12fb4708d509af0c3a312a Jan 27 18:29:59 crc kubenswrapper[5049]: I0127 18:29:59.968726 5049 generic.go:334] "Generic (PLEG): container finished" podID="217ee884-d205-47b6-8b5a-38b054f72ca8" containerID="2302b002e8ea70a4222015ba8659ebd92c541aee919550efe97d842b58bd8277" exitCode=0 Jan 27 18:29:59 crc kubenswrapper[5049]: I0127 18:29:59.968899 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bsb4r" event={"ID":"217ee884-d205-47b6-8b5a-38b054f72ca8","Type":"ContainerDied","Data":"2302b002e8ea70a4222015ba8659ebd92c541aee919550efe97d842b58bd8277"} Jan 27 18:29:59 crc kubenswrapper[5049]: I0127 18:29:59.969332 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bsb4r" event={"ID":"217ee884-d205-47b6-8b5a-38b054f72ca8","Type":"ContainerStarted","Data":"a86761bdec1363e46ff62f659e782503a5a47842b236c6ff0eff8a7d8b9c3387"} Jan 27 18:29:59 crc kubenswrapper[5049]: I0127 18:29:59.971378 5049 generic.go:334] "Generic (PLEG): container finished" podID="d90a52d8-4aaf-476e-b92e-0b6b4894c134" containerID="e30ac110a09b18827a285eedcdfb3cec0c19976dd3895af7a20d798b76b73415" exitCode=0 Jan 27 18:29:59 crc kubenswrapper[5049]: I0127 18:29:59.971421 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b1f2-account-create-update-6nmwv" event={"ID":"d90a52d8-4aaf-476e-b92e-0b6b4894c134","Type":"ContainerDied","Data":"e30ac110a09b18827a285eedcdfb3cec0c19976dd3895af7a20d798b76b73415"} Jan 27 18:29:59 crc kubenswrapper[5049]: I0127 18:29:59.971449 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b1f2-account-create-update-6nmwv" event={"ID":"d90a52d8-4aaf-476e-b92e-0b6b4894c134","Type":"ContainerStarted","Data":"294ff41a285c7f0881748c37ab2d635f2203bb7c4f12fb4708d509af0c3a312a"} Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.147210 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk"] Jan 27 18:30:00 crc 
kubenswrapper[5049]: I0127 18:30:00.148549 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.151751 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.155298 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.212450 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk"] Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.328616 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e100adee-5dff-47a6-92cf-63eee7ba45b5-config-volume\") pod \"collect-profiles-29492310-287vk\" (UID: \"e100adee-5dff-47a6-92cf-63eee7ba45b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.328719 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e100adee-5dff-47a6-92cf-63eee7ba45b5-secret-volume\") pod \"collect-profiles-29492310-287vk\" (UID: \"e100adee-5dff-47a6-92cf-63eee7ba45b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.328954 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tmcr\" (UniqueName: \"kubernetes.io/projected/e100adee-5dff-47a6-92cf-63eee7ba45b5-kube-api-access-8tmcr\") pod \"collect-profiles-29492310-287vk\" (UID: \"e100adee-5dff-47a6-92cf-63eee7ba45b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.430106 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tmcr\" (UniqueName: \"kubernetes.io/projected/e100adee-5dff-47a6-92cf-63eee7ba45b5-kube-api-access-8tmcr\") pod \"collect-profiles-29492310-287vk\" (UID: \"e100adee-5dff-47a6-92cf-63eee7ba45b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.430200 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e100adee-5dff-47a6-92cf-63eee7ba45b5-config-volume\") pod \"collect-profiles-29492310-287vk\" (UID: \"e100adee-5dff-47a6-92cf-63eee7ba45b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.430229 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e100adee-5dff-47a6-92cf-63eee7ba45b5-secret-volume\") pod \"collect-profiles-29492310-287vk\" (UID: \"e100adee-5dff-47a6-92cf-63eee7ba45b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.432533 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/e100adee-5dff-47a6-92cf-63eee7ba45b5-config-volume\") pod \"collect-profiles-29492310-287vk\" (UID: \"e100adee-5dff-47a6-92cf-63eee7ba45b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.448595 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e100adee-5dff-47a6-92cf-63eee7ba45b5-secret-volume\") pod \"collect-profiles-29492310-287vk\" (UID: \"e100adee-5dff-47a6-92cf-63eee7ba45b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.451647 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tmcr\" (UniqueName: \"kubernetes.io/projected/e100adee-5dff-47a6-92cf-63eee7ba45b5-kube-api-access-8tmcr\") pod \"collect-profiles-29492310-287vk\" (UID: \"e100adee-5dff-47a6-92cf-63eee7ba45b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.464733 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.885961 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk"] Jan 27 18:30:00 crc kubenswrapper[5049]: W0127 18:30:00.894933 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode100adee_5dff_47a6_92cf_63eee7ba45b5.slice/crio-2720784f8f554db54ce616fc720e025ed13a2b9ea0162518b5c6f3eaf844513e WatchSource:0}: Error finding container 2720784f8f554db54ce616fc720e025ed13a2b9ea0162518b5c6f3eaf844513e: Status 404 returned error can't find the container with id 2720784f8f554db54ce616fc720e025ed13a2b9ea0162518b5c6f3eaf844513e Jan 27 18:30:00 crc kubenswrapper[5049]: I0127 18:30:00.995346 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" event={"ID":"e100adee-5dff-47a6-92cf-63eee7ba45b5","Type":"ContainerStarted","Data":"2720784f8f554db54ce616fc720e025ed13a2b9ea0162518b5c6f3eaf844513e"} Jan 27 18:30:01 crc kubenswrapper[5049]: I0127 18:30:01.410752 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bsb4r" Jan 27 18:30:01 crc kubenswrapper[5049]: I0127 18:30:01.421199 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b1f2-account-create-update-6nmwv" Jan 27 18:30:01 crc kubenswrapper[5049]: I0127 18:30:01.561036 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217ee884-d205-47b6-8b5a-38b054f72ca8-operator-scripts\") pod \"217ee884-d205-47b6-8b5a-38b054f72ca8\" (UID: \"217ee884-d205-47b6-8b5a-38b054f72ca8\") " Jan 27 18:30:01 crc kubenswrapper[5049]: I0127 18:30:01.561132 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vg2x\" (UniqueName: \"kubernetes.io/projected/d90a52d8-4aaf-476e-b92e-0b6b4894c134-kube-api-access-9vg2x\") pod \"d90a52d8-4aaf-476e-b92e-0b6b4894c134\" (UID: \"d90a52d8-4aaf-476e-b92e-0b6b4894c134\") " Jan 27 18:30:01 crc kubenswrapper[5049]: I0127 18:30:01.561197 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90a52d8-4aaf-476e-b92e-0b6b4894c134-operator-scripts\") pod \"d90a52d8-4aaf-476e-b92e-0b6b4894c134\" (UID: \"d90a52d8-4aaf-476e-b92e-0b6b4894c134\") " Jan 27 18:30:01 crc kubenswrapper[5049]: I0127 18:30:01.561307 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2kph\" (UniqueName: \"kubernetes.io/projected/217ee884-d205-47b6-8b5a-38b054f72ca8-kube-api-access-l2kph\") pod \"217ee884-d205-47b6-8b5a-38b054f72ca8\" (UID: \"217ee884-d205-47b6-8b5a-38b054f72ca8\") " Jan 27 18:30:01 crc kubenswrapper[5049]: I0127 18:30:01.562142 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d90a52d8-4aaf-476e-b92e-0b6b4894c134-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d90a52d8-4aaf-476e-b92e-0b6b4894c134" (UID: "d90a52d8-4aaf-476e-b92e-0b6b4894c134"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:01 crc kubenswrapper[5049]: I0127 18:30:01.562714 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217ee884-d205-47b6-8b5a-38b054f72ca8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "217ee884-d205-47b6-8b5a-38b054f72ca8" (UID: "217ee884-d205-47b6-8b5a-38b054f72ca8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:01 crc kubenswrapper[5049]: I0127 18:30:01.574101 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d90a52d8-4aaf-476e-b92e-0b6b4894c134-kube-api-access-9vg2x" (OuterVolumeSpecName: "kube-api-access-9vg2x") pod "d90a52d8-4aaf-476e-b92e-0b6b4894c134" (UID: "d90a52d8-4aaf-476e-b92e-0b6b4894c134"). InnerVolumeSpecName "kube-api-access-9vg2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:01 crc kubenswrapper[5049]: I0127 18:30:01.574161 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217ee884-d205-47b6-8b5a-38b054f72ca8-kube-api-access-l2kph" (OuterVolumeSpecName: "kube-api-access-l2kph") pod "217ee884-d205-47b6-8b5a-38b054f72ca8" (UID: "217ee884-d205-47b6-8b5a-38b054f72ca8"). InnerVolumeSpecName "kube-api-access-l2kph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:01 crc kubenswrapper[5049]: I0127 18:30:01.662897 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90a52d8-4aaf-476e-b92e-0b6b4894c134-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:01 crc kubenswrapper[5049]: I0127 18:30:01.662927 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2kph\" (UniqueName: \"kubernetes.io/projected/217ee884-d205-47b6-8b5a-38b054f72ca8-kube-api-access-l2kph\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:01 crc kubenswrapper[5049]: I0127 18:30:01.662939 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/217ee884-d205-47b6-8b5a-38b054f72ca8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:01 crc kubenswrapper[5049]: I0127 18:30:01.662947 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vg2x\" (UniqueName: \"kubernetes.io/projected/d90a52d8-4aaf-476e-b92e-0b6b4894c134-kube-api-access-9vg2x\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:02 crc kubenswrapper[5049]: I0127 18:30:02.005613 5049 generic.go:334] "Generic (PLEG): container finished" podID="e100adee-5dff-47a6-92cf-63eee7ba45b5" containerID="d6accd3cc8c4d9e8d878f385641015d13a6a7d327de19d89cdd070c4c6688b00" exitCode=0 Jan 27 18:30:02 crc kubenswrapper[5049]: I0127 18:30:02.005991 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" event={"ID":"e100adee-5dff-47a6-92cf-63eee7ba45b5","Type":"ContainerDied","Data":"d6accd3cc8c4d9e8d878f385641015d13a6a7d327de19d89cdd070c4c6688b00"} Jan 27 18:30:02 crc kubenswrapper[5049]: I0127 18:30:02.007837 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-bsb4r" event={"ID":"217ee884-d205-47b6-8b5a-38b054f72ca8","Type":"ContainerDied","Data":"a86761bdec1363e46ff62f659e782503a5a47842b236c6ff0eff8a7d8b9c3387"} Jan 27 18:30:02 crc kubenswrapper[5049]: I0127 18:30:02.007867 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a86761bdec1363e46ff62f659e782503a5a47842b236c6ff0eff8a7d8b9c3387" Jan 27 18:30:02 crc kubenswrapper[5049]: I0127 18:30:02.007916 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-bsb4r" Jan 27 18:30:02 crc kubenswrapper[5049]: I0127 18:30:02.014457 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b1f2-account-create-update-6nmwv" event={"ID":"d90a52d8-4aaf-476e-b92e-0b6b4894c134","Type":"ContainerDied","Data":"294ff41a285c7f0881748c37ab2d635f2203bb7c4f12fb4708d509af0c3a312a"} Jan 27 18:30:02 crc kubenswrapper[5049]: I0127 18:30:02.014499 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="294ff41a285c7f0881748c37ab2d635f2203bb7c4f12fb4708d509af0c3a312a" Jan 27 18:30:02 crc kubenswrapper[5049]: I0127 18:30:02.014524 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b1f2-account-create-update-6nmwv" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.318889 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.493200 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tmcr\" (UniqueName: \"kubernetes.io/projected/e100adee-5dff-47a6-92cf-63eee7ba45b5-kube-api-access-8tmcr\") pod \"e100adee-5dff-47a6-92cf-63eee7ba45b5\" (UID: \"e100adee-5dff-47a6-92cf-63eee7ba45b5\") " Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.493381 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e100adee-5dff-47a6-92cf-63eee7ba45b5-config-volume\") pod \"e100adee-5dff-47a6-92cf-63eee7ba45b5\" (UID: \"e100adee-5dff-47a6-92cf-63eee7ba45b5\") " Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.493484 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e100adee-5dff-47a6-92cf-63eee7ba45b5-secret-volume\") pod \"e100adee-5dff-47a6-92cf-63eee7ba45b5\" (UID: \"e100adee-5dff-47a6-92cf-63eee7ba45b5\") " Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.494070 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e100adee-5dff-47a6-92cf-63eee7ba45b5-config-volume" (OuterVolumeSpecName: "config-volume") pod "e100adee-5dff-47a6-92cf-63eee7ba45b5" (UID: "e100adee-5dff-47a6-92cf-63eee7ba45b5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.499256 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e100adee-5dff-47a6-92cf-63eee7ba45b5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e100adee-5dff-47a6-92cf-63eee7ba45b5" (UID: "e100adee-5dff-47a6-92cf-63eee7ba45b5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.499657 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e100adee-5dff-47a6-92cf-63eee7ba45b5-kube-api-access-8tmcr" (OuterVolumeSpecName: "kube-api-access-8tmcr") pod "e100adee-5dff-47a6-92cf-63eee7ba45b5" (UID: "e100adee-5dff-47a6-92cf-63eee7ba45b5"). InnerVolumeSpecName "kube-api-access-8tmcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.595431 5049 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e100adee-5dff-47a6-92cf-63eee7ba45b5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.595472 5049 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e100adee-5dff-47a6-92cf-63eee7ba45b5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.595485 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tmcr\" (UniqueName: \"kubernetes.io/projected/e100adee-5dff-47a6-92cf-63eee7ba45b5-kube-api-access-8tmcr\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.831809 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-gtjbs"] Jan 27 18:30:03 crc kubenswrapper[5049]: E0127 18:30:03.832188 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90a52d8-4aaf-476e-b92e-0b6b4894c134" containerName="mariadb-account-create-update" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.832205 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90a52d8-4aaf-476e-b92e-0b6b4894c134" containerName="mariadb-account-create-update" Jan 27 18:30:03 crc kubenswrapper[5049]: E0127 18:30:03.832217 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e100adee-5dff-47a6-92cf-63eee7ba45b5" containerName="collect-profiles" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.832224 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e100adee-5dff-47a6-92cf-63eee7ba45b5" containerName="collect-profiles" Jan 27 18:30:03 crc kubenswrapper[5049]: E0127 18:30:03.832238 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217ee884-d205-47b6-8b5a-38b054f72ca8" containerName="mariadb-database-create" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.832244 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="217ee884-d205-47b6-8b5a-38b054f72ca8" containerName="mariadb-database-create" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.832413 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e100adee-5dff-47a6-92cf-63eee7ba45b5" containerName="collect-profiles" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.832423 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="d90a52d8-4aaf-476e-b92e-0b6b4894c134" containerName="mariadb-account-create-update" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.832439 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="217ee884-d205-47b6-8b5a-38b054f72ca8" containerName="mariadb-database-create" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.833105 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.835265 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.835405 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fm2pw" Jan 27 18:30:03 crc kubenswrapper[5049]: I0127 18:30:03.843329 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gtjbs"] Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.001833 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-db-sync-config-data\") pod \"glance-db-sync-gtjbs\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.002108 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl5nc\" (UniqueName: \"kubernetes.io/projected/16811b41-1b68-4fea-bff7-76feac374a7c-kube-api-access-bl5nc\") pod \"glance-db-sync-gtjbs\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.002266 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-config-data\") pod \"glance-db-sync-gtjbs\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.002340 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-combined-ca-bundle\") pod \"glance-db-sync-gtjbs\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.033166 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" event={"ID":"e100adee-5dff-47a6-92cf-63eee7ba45b5","Type":"ContainerDied","Data":"2720784f8f554db54ce616fc720e025ed13a2b9ea0162518b5c6f3eaf844513e"} Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.033231 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2720784f8f554db54ce616fc720e025ed13a2b9ea0162518b5c6f3eaf844513e" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.033309 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.105001 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-db-sync-config-data\") pod \"glance-db-sync-gtjbs\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.105151 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl5nc\" (UniqueName: \"kubernetes.io/projected/16811b41-1b68-4fea-bff7-76feac374a7c-kube-api-access-bl5nc\") pod \"glance-db-sync-gtjbs\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.105245 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-config-data\") pod \"glance-db-sync-gtjbs\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.105330 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-combined-ca-bundle\") pod \"glance-db-sync-gtjbs\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.110302 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-config-data\") pod \"glance-db-sync-gtjbs\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.110575 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-combined-ca-bundle\") pod \"glance-db-sync-gtjbs\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.121565 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-db-sync-config-data\") pod \"glance-db-sync-gtjbs\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.125268 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl5nc\" (UniqueName: \"kubernetes.io/projected/16811b41-1b68-4fea-bff7-76feac374a7c-kube-api-access-bl5nc\") pod \"glance-db-sync-gtjbs\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.149965 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.420990 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"] Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.428901 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492265-4lvxk"] Jan 27 18:30:04 crc kubenswrapper[5049]: I0127 18:30:04.660387 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gtjbs"] Jan 27 18:30:04 crc kubenswrapper[5049]: W0127 18:30:04.669492 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16811b41_1b68_4fea_bff7_76feac374a7c.slice/crio-76a9445f3d7de8f34ca98468e5c1a8a88ccaa5b31106f4e29991f24ff211127b WatchSource:0}: Error finding container 76a9445f3d7de8f34ca98468e5c1a8a88ccaa5b31106f4e29991f24ff211127b: Status 404 returned error can't find the container with id 76a9445f3d7de8f34ca98468e5c1a8a88ccaa5b31106f4e29991f24ff211127b Jan 27 18:30:05 crc kubenswrapper[5049]: I0127 18:30:05.044343 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gtjbs" event={"ID":"16811b41-1b68-4fea-bff7-76feac374a7c","Type":"ContainerStarted","Data":"76a9445f3d7de8f34ca98468e5c1a8a88ccaa5b31106f4e29991f24ff211127b"} Jan 27 18:30:05 crc kubenswrapper[5049]: I0127 18:30:05.658165 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5af233b-c094-44b3-bcee-89cd3f34d4b9" path="/var/lib/kubelet/pods/e5af233b-c094-44b3-bcee-89cd3f34d4b9/volumes" Jan 27 18:30:06 crc kubenswrapper[5049]: I0127 18:30:06.055591 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gtjbs" event={"ID":"16811b41-1b68-4fea-bff7-76feac374a7c","Type":"ContainerStarted","Data":"b7485ce71ce1303384b8b16b3a9c38bef1e7b36cf626d29e13ef5523aa36d016"} Jan 27 18:30:06 crc kubenswrapper[5049]: I0127 18:30:06.072309 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-gtjbs" podStartSLOduration=3.072290315 podStartE2EDuration="3.072290315s" podCreationTimestamp="2026-01-27 18:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:06.068841437 +0000 UTC m=+5581.167814986" watchObservedRunningTime="2026-01-27 18:30:06.072290315 +0000 UTC m=+5581.171263854" Jan 27 18:30:06 crc kubenswrapper[5049]: I0127 18:30:06.646487 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192" Jan 27 18:30:06 crc kubenswrapper[5049]: E0127 18:30:06.646794 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:30:09 crc kubenswrapper[5049]: I0127 18:30:09.081891 5049 generic.go:334] "Generic (PLEG): container finished" podID="16811b41-1b68-4fea-bff7-76feac374a7c" containerID="b7485ce71ce1303384b8b16b3a9c38bef1e7b36cf626d29e13ef5523aa36d016" exitCode=0 Jan 27 18:30:09 crc kubenswrapper[5049]: I0127 
18:30:09.082013 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gtjbs" event={"ID":"16811b41-1b68-4fea-bff7-76feac374a7c","Type":"ContainerDied","Data":"b7485ce71ce1303384b8b16b3a9c38bef1e7b36cf626d29e13ef5523aa36d016"} Jan 27 18:30:10 crc kubenswrapper[5049]: I0127 18:30:10.527515 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:10 crc kubenswrapper[5049]: I0127 18:30:10.689419 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-db-sync-config-data\") pod \"16811b41-1b68-4fea-bff7-76feac374a7c\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " Jan 27 18:30:10 crc kubenswrapper[5049]: I0127 18:30:10.689605 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-combined-ca-bundle\") pod \"16811b41-1b68-4fea-bff7-76feac374a7c\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " Jan 27 18:30:10 crc kubenswrapper[5049]: I0127 18:30:10.689869 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-config-data\") pod \"16811b41-1b68-4fea-bff7-76feac374a7c\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " Jan 27 18:30:10 crc kubenswrapper[5049]: I0127 18:30:10.689976 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl5nc\" (UniqueName: \"kubernetes.io/projected/16811b41-1b68-4fea-bff7-76feac374a7c-kube-api-access-bl5nc\") pod \"16811b41-1b68-4fea-bff7-76feac374a7c\" (UID: \"16811b41-1b68-4fea-bff7-76feac374a7c\") " Jan 27 18:30:10 crc kubenswrapper[5049]: I0127 18:30:10.708016 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "16811b41-1b68-4fea-bff7-76feac374a7c" (UID: "16811b41-1b68-4fea-bff7-76feac374a7c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:10 crc kubenswrapper[5049]: I0127 18:30:10.709864 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16811b41-1b68-4fea-bff7-76feac374a7c-kube-api-access-bl5nc" (OuterVolumeSpecName: "kube-api-access-bl5nc") pod "16811b41-1b68-4fea-bff7-76feac374a7c" (UID: "16811b41-1b68-4fea-bff7-76feac374a7c"). InnerVolumeSpecName "kube-api-access-bl5nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:10 crc kubenswrapper[5049]: I0127 18:30:10.716040 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16811b41-1b68-4fea-bff7-76feac374a7c" (UID: "16811b41-1b68-4fea-bff7-76feac374a7c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:10 crc kubenswrapper[5049]: I0127 18:30:10.767216 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-config-data" (OuterVolumeSpecName: "config-data") pod "16811b41-1b68-4fea-bff7-76feac374a7c" (UID: "16811b41-1b68-4fea-bff7-76feac374a7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:10 crc kubenswrapper[5049]: I0127 18:30:10.792983 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:10 crc kubenswrapper[5049]: I0127 18:30:10.793032 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl5nc\" (UniqueName: \"kubernetes.io/projected/16811b41-1b68-4fea-bff7-76feac374a7c-kube-api-access-bl5nc\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:10 crc kubenswrapper[5049]: I0127 18:30:10.793047 5049 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:10 crc kubenswrapper[5049]: I0127 18:30:10.793057 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16811b41-1b68-4fea-bff7-76feac374a7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.121203 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gtjbs" event={"ID":"16811b41-1b68-4fea-bff7-76feac374a7c","Type":"ContainerDied","Data":"76a9445f3d7de8f34ca98468e5c1a8a88ccaa5b31106f4e29991f24ff211127b"} Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.121251 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76a9445f3d7de8f34ca98468e5c1a8a88ccaa5b31106f4e29991f24ff211127b" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.121280 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gtjbs" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.395075 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:30:11 crc kubenswrapper[5049]: E0127 18:30:11.395495 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16811b41-1b68-4fea-bff7-76feac374a7c" containerName="glance-db-sync" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.395513 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="16811b41-1b68-4fea-bff7-76feac374a7c" containerName="glance-db-sync" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.395702 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="16811b41-1b68-4fea-bff7-76feac374a7c" containerName="glance-db-sync" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.397028 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.399108 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.399356 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.399855 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fm2pw" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.400133 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.407001 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.407065 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-scripts\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.407136 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-logs\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.407177 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.407212 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28ffx\" (UniqueName: \"kubernetes.io/projected/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-kube-api-access-28ffx\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.407261 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-config-data\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.407303 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-ceph\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " 
pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.418337 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.508910 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-ceph\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.509190 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.509251 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-scripts\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.509340 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-logs\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.509390 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.509426 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28ffx\" (UniqueName: \"kubernetes.io/projected/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-kube-api-access-28ffx\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.509459 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-config-data\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.509853 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-logs\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.509975 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.514761 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-scripts\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.515647 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-config-data\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.515709 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-ceph\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.515909 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.540921 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28ffx\" (UniqueName: \"kubernetes.io/projected/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-kube-api-access-28ffx\") pod \"glance-default-external-api-0\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.552762 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d7646d5bc-qjlng"] Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.554637 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.566304 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d7646d5bc-qjlng"] Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.611068 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-ovsdbserver-sb\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.611117 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2mnt\" (UniqueName: \"kubernetes.io/projected/9d7a32a7-3cdd-440b-bd73-080c07057dad-kube-api-access-m2mnt\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.611157 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-ovsdbserver-nb\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.611176 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-config\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.611192 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-dns-svc\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.636633 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.638411 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.641825 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.664366 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.714373 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.714449 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-logs\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.714472 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.714510 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddxqv\" (UniqueName: \"kubernetes.io/projected/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-kube-api-access-ddxqv\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.714559 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-ovsdbserver-sb\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.714595 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2mnt\" (UniqueName: \"kubernetes.io/projected/9d7a32a7-3cdd-440b-bd73-080c07057dad-kube-api-access-m2mnt\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.714617 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.714642 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.714695 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-ovsdbserver-nb\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.714717 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-config\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.714737 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.714753 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-dns-svc\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.715597 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-dns-svc\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.716084 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-ovsdbserver-nb\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.716452 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-ovsdbserver-sb\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.716572 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.716999 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-config\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.740153 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2mnt\" (UniqueName: \"kubernetes.io/projected/9d7a32a7-3cdd-440b-bd73-080c07057dad-kube-api-access-m2mnt\") pod \"dnsmasq-dns-6d7646d5bc-qjlng\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.816233 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.816374 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-logs\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.816401 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.816440 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddxqv\" (UniqueName: \"kubernetes.io/projected/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-kube-api-access-ddxqv\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.816551 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.816584 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.816636 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: 
I0127 18:30:11.817195 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.817419 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-logs\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.821278 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.822219 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.822624 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.824861 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.837711 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddxqv\" (UniqueName: \"kubernetes.io/projected/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-kube-api-access-ddxqv\") pod \"glance-default-internal-api-0\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " pod="openstack/glance-default-internal-api-0" Jan 27 18:30:11 crc kubenswrapper[5049]: I0127 18:30:11.937359 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:12 crc kubenswrapper[5049]: I0127 18:30:12.014682 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:30:12 crc kubenswrapper[5049]: I0127 18:30:12.315163 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:30:12 crc kubenswrapper[5049]: W0127 18:30:12.416481 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d7a32a7_3cdd_440b_bd73_080c07057dad.slice/crio-7f05ddb2b6c1cad8fa16597adf8ef8882cbfe0e14250fd2d6b45a8b70deed565 WatchSource:0}: Error finding container 7f05ddb2b6c1cad8fa16597adf8ef8882cbfe0e14250fd2d6b45a8b70deed565: Status 404 returned error can't find the container with id 7f05ddb2b6c1cad8fa16597adf8ef8882cbfe0e14250fd2d6b45a8b70deed565 Jan 27 18:30:12 crc kubenswrapper[5049]: I0127 18:30:12.417592 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d7646d5bc-qjlng"] Jan 27 18:30:12 crc kubenswrapper[5049]: I0127 18:30:12.670069 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:30:12 crc kubenswrapper[5049]: I0127 18:30:12.677739 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:30:13 crc kubenswrapper[5049]: I0127 18:30:13.150782 5049 generic.go:334] "Generic (PLEG): container finished" podID="9d7a32a7-3cdd-440b-bd73-080c07057dad" containerID="8e1dbdf746d94c3a044d07d3e9d353212dea0c5bbc5cb8fe28167952f4ce2a32" exitCode=0 Jan 27 18:30:13 crc kubenswrapper[5049]: I0127 18:30:13.150870 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" event={"ID":"9d7a32a7-3cdd-440b-bd73-080c07057dad","Type":"ContainerDied","Data":"8e1dbdf746d94c3a044d07d3e9d353212dea0c5bbc5cb8fe28167952f4ce2a32"} Jan 27 18:30:13 crc kubenswrapper[5049]: I0127 18:30:13.150911 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" event={"ID":"9d7a32a7-3cdd-440b-bd73-080c07057dad","Type":"ContainerStarted","Data":"7f05ddb2b6c1cad8fa16597adf8ef8882cbfe0e14250fd2d6b45a8b70deed565"} Jan 27 18:30:13 crc kubenswrapper[5049]: I0127 18:30:13.152354 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89bba603-5b3c-447b-a8dc-b68bfcdfa53b","Type":"ContainerStarted","Data":"a7c844fecd178fd6e0d3673ef1f5012977b0cadd90fd03c697eb7e842a445596"} Jan 27 18:30:13 crc kubenswrapper[5049]: I0127 18:30:13.156970 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8","Type":"ContainerStarted","Data":"76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9"} Jan 27 18:30:13 crc kubenswrapper[5049]: I0127 18:30:13.157024 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8","Type":"ContainerStarted","Data":"df341172827060430dfe140f0ace33cbdaed91362b1b92eadfe04edfa4f90650"} Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.167784 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89bba603-5b3c-447b-a8dc-b68bfcdfa53b","Type":"ContainerStarted","Data":"57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9"} Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.171768 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"89bba603-5b3c-447b-a8dc-b68bfcdfa53b","Type":"ContainerStarted","Data":"d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270"} Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.172391 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8","Type":"ContainerStarted","Data":"7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601"} Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.172658 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" containerName="glance-log" containerID="cri-o://76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9" gracePeriod=30 Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.172928 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" containerName="glance-httpd" containerID="cri-o://7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601" gracePeriod=30 Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.183828 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" event={"ID":"9d7a32a7-3cdd-440b-bd73-080c07057dad","Type":"ContainerStarted","Data":"75d66a6574973e8a94d5a15850bf669b4058a851009beb21db4d9ab3c841886a"} Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.184755 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.201754 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.201730676 podStartE2EDuration="3.201730676s" podCreationTimestamp="2026-01-27 18:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:14.192350791 +0000 UTC m=+5589.291324350" watchObservedRunningTime="2026-01-27 18:30:14.201730676 +0000 UTC m=+5589.300704225" Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.219939 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.21991922 podStartE2EDuration="3.21991922s" podCreationTimestamp="2026-01-27 18:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:14.216211695 +0000 UTC m=+5589.315185244" watchObservedRunningTime="2026-01-27 18:30:14.21991922 +0000 UTC m=+5589.318892769" Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.239492 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" podStartSLOduration=3.239469841 podStartE2EDuration="3.239469841s" podCreationTimestamp="2026-01-27 18:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:14.23232462 +0000 UTC m=+5589.331298179" watchObservedRunningTime="2026-01-27 18:30:14.239469841 +0000 UTC m=+5589.338443390" Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.688775 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.857535 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.921809 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28ffx\" (UniqueName: \"kubernetes.io/projected/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-kube-api-access-28ffx\") pod \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.921848 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-ceph\") pod \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.921883 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-scripts\") pod \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.921952 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-config-data\") pod \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.921986 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-combined-ca-bundle\") pod \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.922052 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-httpd-run\") pod \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.922111 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-logs\") pod \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\" (UID: \"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8\") " Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.922713 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-logs" (OuterVolumeSpecName: "logs") pod "e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" (UID: "e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.922862 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" (UID: "e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.927335 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-ceph" (OuterVolumeSpecName: "ceph") pod "e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" (UID: "e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.935851 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-scripts" (OuterVolumeSpecName: "scripts") pod "e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" (UID: "e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.935980 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-kube-api-access-28ffx" (OuterVolumeSpecName: "kube-api-access-28ffx") pod "e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" (UID: "e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8"). InnerVolumeSpecName "kube-api-access-28ffx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.947190 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" (UID: "e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:14 crc kubenswrapper[5049]: I0127 18:30:14.965252 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-config-data" (OuterVolumeSpecName: "config-data") pod "e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" (UID: "e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.024020 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.024225 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28ffx\" (UniqueName: \"kubernetes.io/projected/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-kube-api-access-28ffx\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.024288 5049 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-ceph\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.024966 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.025065 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.025150 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.025246 5049 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.196283 5049 generic.go:334] "Generic (PLEG): container finished" podID="e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" containerID="7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601" exitCode=0 Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.197361 5049 generic.go:334] "Generic (PLEG): container finished" podID="e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" containerID="76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9" exitCode=143 Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.196694 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.196558 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8","Type":"ContainerDied","Data":"7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601"} Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.198041 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8","Type":"ContainerDied","Data":"76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9"} Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.198083 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8","Type":"ContainerDied","Data":"df341172827060430dfe140f0ace33cbdaed91362b1b92eadfe04edfa4f90650"} Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.198110 5049 scope.go:117] "RemoveContainer" containerID="7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.234539 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.235937 5049 scope.go:117] "RemoveContainer" containerID="76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.256291 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.265509 5049 scope.go:117] "RemoveContainer" containerID="7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601" Jan 27 18:30:15 crc kubenswrapper[5049]: E0127 18:30:15.266002 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601\": container with ID starting with 7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601 not found: ID does not exist" containerID="7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.266041 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601"} err="failed to get container status \"7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601\": rpc error: code = NotFound desc = could not find container \"7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601\": container with ID starting with 7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601 not found: ID does not exist" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.266065 5049 scope.go:117] "RemoveContainer" containerID="76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9" Jan 27 18:30:15 crc kubenswrapper[5049]: E0127 18:30:15.266362 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9\": container with ID starting with 76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9 not found: ID does not exist" 
containerID="76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.266402 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9"} err="failed to get container status \"76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9\": rpc error: code = NotFound desc = could not find container \"76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9\": container with ID starting with 76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9 not found: ID does not exist" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.266439 5049 scope.go:117] "RemoveContainer" containerID="7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.266719 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601"} err="failed to get container status \"7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601\": rpc error: code = NotFound desc = could not find container \"7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601\": container with ID starting with 7b0ef56b30b190ea2279d81fcbf5c8b6218858fff78eeb33b0bdb12c704d2601 not found: ID does not exist" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.266744 5049 scope.go:117] "RemoveContainer" containerID="76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.266978 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9"} err="failed to get container status \"76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9\": rpc error: code = NotFound desc = could not find container \"76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9\": container with ID starting with 76834d2d0247422ba2cbfbfbe9ee744c391c8f8a5da43f9d1622f7286f2f41a9 not found: ID does not exist" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.268721 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:30:15 crc kubenswrapper[5049]: E0127 18:30:15.269208 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" containerName="glance-httpd" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.269241 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" containerName="glance-httpd" Jan 27 18:30:15 crc kubenswrapper[5049]: E0127 18:30:15.269254 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" containerName="glance-log" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.269263 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" containerName="glance-log" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.269467 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" containerName="glance-log" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.269493 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" containerName="glance-httpd" 
Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.270812 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.277115 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.277747 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.334078 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgggm\" (UniqueName: \"kubernetes.io/projected/47f141fd-e32a-4605-84c5-f5af36c92ad3-kube-api-access-rgggm\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.335346 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f141fd-e32a-4605-84c5-f5af36c92ad3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.335613 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47f141fd-e32a-4605-84c5-f5af36c92ad3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.335810 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47f141fd-e32a-4605-84c5-f5af36c92ad3-config-data\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.335885 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/47f141fd-e32a-4605-84c5-f5af36c92ad3-ceph\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.335920 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47f141fd-e32a-4605-84c5-f5af36c92ad3-logs\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.336017 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47f141fd-e32a-4605-84c5-f5af36c92ad3-scripts\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.437887 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/47f141fd-e32a-4605-84c5-f5af36c92ad3-config-data\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.437947 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/47f141fd-e32a-4605-84c5-f5af36c92ad3-ceph\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.437969 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47f141fd-e32a-4605-84c5-f5af36c92ad3-logs\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.438004 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47f141fd-e32a-4605-84c5-f5af36c92ad3-scripts\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.438049 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgggm\" (UniqueName: \"kubernetes.io/projected/47f141fd-e32a-4605-84c5-f5af36c92ad3-kube-api-access-rgggm\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.438087 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f141fd-e32a-4605-84c5-f5af36c92ad3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.438132 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47f141fd-e32a-4605-84c5-f5af36c92ad3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.438615 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/47f141fd-e32a-4605-84c5-f5af36c92ad3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.439001 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47f141fd-e32a-4605-84c5-f5af36c92ad3-logs\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.443495 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47f141fd-e32a-4605-84c5-f5af36c92ad3-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.444353 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47f141fd-e32a-4605-84c5-f5af36c92ad3-config-data\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.445403 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f141fd-e32a-4605-84c5-f5af36c92ad3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.452461 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/47f141fd-e32a-4605-84c5-f5af36c92ad3-ceph\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.459235 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgggm\" (UniqueName: \"kubernetes.io/projected/47f141fd-e32a-4605-84c5-f5af36c92ad3-kube-api-access-rgggm\") pod \"glance-default-external-api-0\" (UID: \"47f141fd-e32a-4605-84c5-f5af36c92ad3\") " pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.605694 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 18:30:15 crc kubenswrapper[5049]: I0127 18:30:15.657005 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8" path="/var/lib/kubelet/pods/e3cd1590-ec13-429d-8fd5-d16bd8d2fcb8/volumes" Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.131646 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.207395 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"47f141fd-e32a-4605-84c5-f5af36c92ad3","Type":"ContainerStarted","Data":"eaf675c3725e52cd5b8590030c0ae095ca8c336fce301f073657680cb4ed2a9e"} Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.207583 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="89bba603-5b3c-447b-a8dc-b68bfcdfa53b" containerName="glance-log" containerID="cri-o://d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270" gracePeriod=30 Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.207642 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="89bba603-5b3c-447b-a8dc-b68bfcdfa53b" containerName="glance-httpd" containerID="cri-o://57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9" gracePeriod=30 Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.681713 5049 scope.go:117] "RemoveContainer" containerID="b191a98e41e8bbe9dfc6c6ef5f0f2b2fd8f26966762c999c85267f147d960fdf" Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.893347 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.961278 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-logs\") pod \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.961633 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-config-data\") pod \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.961662 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-scripts\") pod \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.961759 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-logs" (OuterVolumeSpecName: "logs") pod "89bba603-5b3c-447b-a8dc-b68bfcdfa53b" (UID: "89bba603-5b3c-447b-a8dc-b68bfcdfa53b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.961794 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-combined-ca-bundle\") pod \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.961841 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddxqv\" (UniqueName: \"kubernetes.io/projected/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-kube-api-access-ddxqv\") pod \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.961894 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-httpd-run\") pod \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.961911 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-ceph\") pod \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\" (UID: \"89bba603-5b3c-447b-a8dc-b68bfcdfa53b\") " Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.962310 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.963010 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "89bba603-5b3c-447b-a8dc-b68bfcdfa53b" (UID: "89bba603-5b3c-447b-a8dc-b68bfcdfa53b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.966409 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-ceph" (OuterVolumeSpecName: "ceph") pod "89bba603-5b3c-447b-a8dc-b68bfcdfa53b" (UID: "89bba603-5b3c-447b-a8dc-b68bfcdfa53b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.969084 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-scripts" (OuterVolumeSpecName: "scripts") pod "89bba603-5b3c-447b-a8dc-b68bfcdfa53b" (UID: "89bba603-5b3c-447b-a8dc-b68bfcdfa53b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.979405 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-kube-api-access-ddxqv" (OuterVolumeSpecName: "kube-api-access-ddxqv") pod "89bba603-5b3c-447b-a8dc-b68bfcdfa53b" (UID: "89bba603-5b3c-447b-a8dc-b68bfcdfa53b"). InnerVolumeSpecName "kube-api-access-ddxqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:16 crc kubenswrapper[5049]: I0127 18:30:16.998478 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89bba603-5b3c-447b-a8dc-b68bfcdfa53b" (UID: "89bba603-5b3c-447b-a8dc-b68bfcdfa53b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.010852 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-config-data" (OuterVolumeSpecName: "config-data") pod "89bba603-5b3c-447b-a8dc-b68bfcdfa53b" (UID: "89bba603-5b3c-447b-a8dc-b68bfcdfa53b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.063917 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.063959 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddxqv\" (UniqueName: \"kubernetes.io/projected/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-kube-api-access-ddxqv\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.063972 5049 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.063985 5049 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-ceph\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.063996 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.064006 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89bba603-5b3c-447b-a8dc-b68bfcdfa53b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.216209 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"47f141fd-e32a-4605-84c5-f5af36c92ad3","Type":"ContainerStarted","Data":"456ba276c09d2f7c1c955535d2276cee031acfcd544782cf96f921532fb0838d"} Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.216265 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"47f141fd-e32a-4605-84c5-f5af36c92ad3","Type":"ContainerStarted","Data":"70d2efb253d540f4e7e937807901b943db46eb2f09a5cca2b581493660698b7a"} Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.219113 5049 generic.go:334] "Generic (PLEG): container finished" podID="89bba603-5b3c-447b-a8dc-b68bfcdfa53b" containerID="57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9" exitCode=0 Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.219140 5049 generic.go:334] "Generic (PLEG): container finished" podID="89bba603-5b3c-447b-a8dc-b68bfcdfa53b" containerID="d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270" exitCode=143 Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.219164 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89bba603-5b3c-447b-a8dc-b68bfcdfa53b","Type":"ContainerDied","Data":"57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9"} Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.219188 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"89bba603-5b3c-447b-a8dc-b68bfcdfa53b","Type":"ContainerDied","Data":"d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270"} Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.219200 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"89bba603-5b3c-447b-a8dc-b68bfcdfa53b","Type":"ContainerDied","Data":"a7c844fecd178fd6e0d3673ef1f5012977b0cadd90fd03c697eb7e842a445596"} Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.219217 5049 scope.go:117] "RemoveContainer" containerID="57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.219218 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.235066 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.235047117 podStartE2EDuration="2.235047117s" podCreationTimestamp="2026-01-27 18:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:17.23339969 +0000 UTC m=+5592.332373239" watchObservedRunningTime="2026-01-27 18:30:17.235047117 +0000 UTC m=+5592.334020666" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.240176 5049 scope.go:117] "RemoveContainer" containerID="d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.263876 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.277686 5049 scope.go:117] "RemoveContainer" containerID="57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9" Jan 27 18:30:17 crc kubenswrapper[5049]: E0127 18:30:17.280145 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9\": container with ID starting with 57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9 not found: ID does not exist" containerID="57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.280189 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9"} err="failed to get container status \"57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9\": rpc error: code = NotFound desc = could not find container \"57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9\": container with ID starting with 57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9 not found: ID does not exist" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.280216 5049 scope.go:117] "RemoveContainer" containerID="d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270" Jan 27 18:30:17 crc kubenswrapper[5049]: E0127 18:30:17.280551 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270\": container with ID starting with d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270 not found: ID does not exist" containerID="d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.280652 5049 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270"} err="failed to get container status \"d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270\": rpc error: code = NotFound desc = could not find container \"d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270\": container with ID starting with d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270 not found: ID does not exist" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.280771 5049 scope.go:117] "RemoveContainer" containerID="57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.281157 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9"} err="failed to get container status \"57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9\": rpc error: code = NotFound desc = could not find container \"57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9\": container with ID starting with 57dbbde28040e9c1e8e2d7f92cf1425416416999dd0600b41e223d4f105e3cb9 not found: ID does not exist" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.281182 5049 scope.go:117] "RemoveContainer" containerID="d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.281407 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270"} err="failed to get container status \"d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270\": rpc error: code = NotFound desc = could not find container \"d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270\": container with ID starting with d5fea7863207872de349950f6f207398627bdc6c18dd05bbe1a4f3fb73651270 not found: ID does not exist" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.281756 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.290850 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 18:30:17 crc kubenswrapper[5049]: E0127 18:30:17.291175 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89bba603-5b3c-447b-a8dc-b68bfcdfa53b" containerName="glance-httpd" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.291191 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="89bba603-5b3c-447b-a8dc-b68bfcdfa53b" containerName="glance-httpd" Jan 27 18:30:17 crc kubenswrapper[5049]: E0127 18:30:17.291204 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89bba603-5b3c-447b-a8dc-b68bfcdfa53b" containerName="glance-log" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.291211 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="89bba603-5b3c-447b-a8dc-b68bfcdfa53b" containerName="glance-log" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.291350 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="89bba603-5b3c-447b-a8dc-b68bfcdfa53b" containerName="glance-log" Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.291370 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="89bba603-5b3c-447b-a8dc-b68bfcdfa53b" containerName="glance-httpd" Jan 27 18:30:17 crc kubenswrapper[5049]: 
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.292161 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.304834 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.305968 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.370965 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.371037 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qddqh\" (UniqueName: \"kubernetes.io/projected/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-kube-api-access-qddqh\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.371116 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.371284 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.371413 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.371615 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.371645 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-ceph\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.472845 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.473109 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-ceph\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.473216 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.473332 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qddqh\" (UniqueName: \"kubernetes.io/projected/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-kube-api-access-qddqh\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.473493 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.473615 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.473749 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.473938 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.474131 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.476726 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-ceph\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.477213 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.477548 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.480265 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.489765 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qddqh\" (UniqueName: \"kubernetes.io/projected/fa6051da-b458-4c9e-80ac-4a1a64bcb1ec-kube-api-access-qddqh\") pod \"glance-default-internal-api-0\" (UID: \"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec\") " pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.621410 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:17 crc kubenswrapper[5049]: I0127 18:30:17.659663 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89bba603-5b3c-447b-a8dc-b68bfcdfa53b" path="/var/lib/kubelet/pods/89bba603-5b3c-447b-a8dc-b68bfcdfa53b/volumes"
Jan 27 18:30:18 crc kubenswrapper[5049]: I0127 18:30:18.164347 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 27 18:30:18 crc kubenswrapper[5049]: I0127 18:30:18.229747 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec","Type":"ContainerStarted","Data":"339d164118f81bdc3fb230cb574e55226720e57fad18a3d5dcd3041c65f2da8b"}
Jan 27 18:30:19 crc kubenswrapper[5049]: I0127 18:30:19.240052 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec","Type":"ContainerStarted","Data":"24ccf2bd8981be7c858df503c97de50cab35f3c3240eecd26b20b9fd1b58414f"}
Jan 27 18:30:19 crc kubenswrapper[5049]: I0127 18:30:19.240306 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fa6051da-b458-4c9e-80ac-4a1a64bcb1ec","Type":"ContainerStarted","Data":"1e7ee12de869b23163a0b3fe0a15783bb82c870e871fe12ffb19440b93af7326"}
Jan 27 18:30:19 crc kubenswrapper[5049]: I0127 18:30:19.267575 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.26754468 podStartE2EDuration="2.26754468s" podCreationTimestamp="2026-01-27 18:30:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:19.258775953 +0000 UTC m=+5594.357749502" watchObservedRunningTime="2026-01-27 18:30:19.26754468 +0000 UTC m=+5594.366518229"
Need to start a new one" pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.556547 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-config\") pod \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.556656 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-ovsdbserver-nb\") pod \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.556704 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-dns-svc\") pod \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.556755 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm7g5\" (UniqueName: \"kubernetes.io/projected/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-kube-api-access-sm7g5\") pod \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.556793 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-ovsdbserver-sb\") pod \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\" (UID: \"62db2bb1-44d9-4bf1-824b-44aac8bde3f3\") " Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.563172 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-kube-api-access-sm7g5" (OuterVolumeSpecName: "kube-api-access-sm7g5") pod "62db2bb1-44d9-4bf1-824b-44aac8bde3f3" (UID: "62db2bb1-44d9-4bf1-824b-44aac8bde3f3"). InnerVolumeSpecName "kube-api-access-sm7g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.606091 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "62db2bb1-44d9-4bf1-824b-44aac8bde3f3" (UID: "62db2bb1-44d9-4bf1-824b-44aac8bde3f3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.607905 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "62db2bb1-44d9-4bf1-824b-44aac8bde3f3" (UID: "62db2bb1-44d9-4bf1-824b-44aac8bde3f3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.618449 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-config" (OuterVolumeSpecName: "config") pod "62db2bb1-44d9-4bf1-824b-44aac8bde3f3" (UID: "62db2bb1-44d9-4bf1-824b-44aac8bde3f3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.623415 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "62db2bb1-44d9-4bf1-824b-44aac8bde3f3" (UID: "62db2bb1-44d9-4bf1-824b-44aac8bde3f3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.659566 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.659913 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.659927 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.659938 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm7g5\" (UniqueName: \"kubernetes.io/projected/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-kube-api-access-sm7g5\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:22 crc kubenswrapper[5049]: I0127 18:30:22.659948 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62db2bb1-44d9-4bf1-824b-44aac8bde3f3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:23 crc kubenswrapper[5049]: I0127 18:30:23.286660 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" event={"ID":"62db2bb1-44d9-4bf1-824b-44aac8bde3f3","Type":"ContainerDied","Data":"cc5d600e851de4cd1342ad56d929b2f501de171c0e77950c2e3e22f728bddf9f"} Jan 27 18:30:23 crc kubenswrapper[5049]: I0127 18:30:23.286783 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb55df6c7-smd5h" Jan 27 18:30:23 crc kubenswrapper[5049]: I0127 18:30:23.287013 5049 scope.go:117] "RemoveContainer" containerID="7244a6924043f791bcc2e3da6afdb23be31a5a4f462f8559adc4934ef924c40d" Jan 27 18:30:23 crc kubenswrapper[5049]: I0127 18:30:23.316055 5049 scope.go:117] "RemoveContainer" containerID="7c305bc81b71d34c7eb355bec05b8374d030d73e43d8cd93b302adb2696c05cc" Jan 27 18:30:23 crc kubenswrapper[5049]: I0127 18:30:23.319494 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb55df6c7-smd5h"] Jan 27 18:30:23 crc kubenswrapper[5049]: I0127 18:30:23.326666 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb55df6c7-smd5h"] Jan 27 18:30:23 crc kubenswrapper[5049]: I0127 18:30:23.661462 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62db2bb1-44d9-4bf1-824b-44aac8bde3f3" path="/var/lib/kubelet/pods/62db2bb1-44d9-4bf1-824b-44aac8bde3f3/volumes" Jan 27 18:30:25 crc kubenswrapper[5049]: I0127 18:30:25.606697 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 18:30:25 crc kubenswrapper[5049]: I0127 18:30:25.607110 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 18:30:25 crc kubenswrapper[5049]: I0127 18:30:25.643286 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 18:30:25 crc kubenswrapper[5049]: I0127 18:30:25.662646 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 18:30:26 crc kubenswrapper[5049]: I0127 18:30:26.322053 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 18:30:26 crc kubenswrapper[5049]: I0127 18:30:26.322538 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 18:30:27 crc kubenswrapper[5049]: I0127 18:30:27.622082 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 18:30:27 crc kubenswrapper[5049]: I0127 18:30:27.622645 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 18:30:27 crc kubenswrapper[5049]: I0127 18:30:27.664077 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 18:30:27 crc kubenswrapper[5049]: I0127 18:30:27.678054 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 18:30:28 crc kubenswrapper[5049]: I0127 18:30:28.348655 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 18:30:28 crc kubenswrapper[5049]: I0127 18:30:28.348715 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 18:30:28 crc kubenswrapper[5049]: I0127 18:30:28.550886 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 18:30:28 crc kubenswrapper[5049]: I0127 18:30:28.551031 5049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 18:30:28 crc 
Jan 27 18:30:28 crc kubenswrapper[5049]: I0127 18:30:28.553706 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 27 18:30:30 crc kubenswrapper[5049]: I0127 18:30:30.562155 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:30 crc kubenswrapper[5049]: I0127 18:30:30.563042 5049 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 27 18:30:30 crc kubenswrapper[5049]: I0127 18:30:30.731470 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.019866 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-k9ljd"]
Jan 27 18:30:37 crc kubenswrapper[5049]: E0127 18:30:37.025565 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62db2bb1-44d9-4bf1-824b-44aac8bde3f3" containerName="dnsmasq-dns"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.025753 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="62db2bb1-44d9-4bf1-824b-44aac8bde3f3" containerName="dnsmasq-dns"
Jan 27 18:30:37 crc kubenswrapper[5049]: E0127 18:30:37.025875 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62db2bb1-44d9-4bf1-824b-44aac8bde3f3" containerName="init"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.026031 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="62db2bb1-44d9-4bf1-824b-44aac8bde3f3" containerName="init"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.026433 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="62db2bb1-44d9-4bf1-824b-44aac8bde3f3" containerName="dnsmasq-dns"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.027665 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k9ljd"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.031877 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-k9ljd"]
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.114160 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6731-account-create-update-5jb56"]
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.115139 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6731-account-create-update-5jb56"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.117496 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.131375 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6731-account-create-update-5jb56"]
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.145527 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6779\" (UniqueName: \"kubernetes.io/projected/af843ebd-0b01-45b3-9520-6d9375d9edee-kube-api-access-k6779\") pod \"placement-db-create-k9ljd\" (UID: \"af843ebd-0b01-45b3-9520-6d9375d9edee\") " pod="openstack/placement-db-create-k9ljd"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.145578 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af843ebd-0b01-45b3-9520-6d9375d9edee-operator-scripts\") pod \"placement-db-create-k9ljd\" (UID: \"af843ebd-0b01-45b3-9520-6d9375d9edee\") " pod="openstack/placement-db-create-k9ljd"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.247529 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5fb317-bb87-451b-b0d3-696a9b32b1cb-operator-scripts\") pod \"placement-6731-account-create-update-5jb56\" (UID: \"fb5fb317-bb87-451b-b0d3-696a9b32b1cb\") " pod="openstack/placement-6731-account-create-update-5jb56"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.247616 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6779\" (UniqueName: \"kubernetes.io/projected/af843ebd-0b01-45b3-9520-6d9375d9edee-kube-api-access-k6779\") pod \"placement-db-create-k9ljd\" (UID: \"af843ebd-0b01-45b3-9520-6d9375d9edee\") " pod="openstack/placement-db-create-k9ljd"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.247640 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af843ebd-0b01-45b3-9520-6d9375d9edee-operator-scripts\") pod \"placement-db-create-k9ljd\" (UID: \"af843ebd-0b01-45b3-9520-6d9375d9edee\") " pod="openstack/placement-db-create-k9ljd"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.247793 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbzp4\" (UniqueName: \"kubernetes.io/projected/fb5fb317-bb87-451b-b0d3-696a9b32b1cb-kube-api-access-zbzp4\") pod \"placement-6731-account-create-update-5jb56\" (UID: \"fb5fb317-bb87-451b-b0d3-696a9b32b1cb\") " pod="openstack/placement-6731-account-create-update-5jb56"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.248710 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af843ebd-0b01-45b3-9520-6d9375d9edee-operator-scripts\") pod \"placement-db-create-k9ljd\" (UID: \"af843ebd-0b01-45b3-9520-6d9375d9edee\") " pod="openstack/placement-db-create-k9ljd"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.267639 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6779\" (UniqueName: \"kubernetes.io/projected/af843ebd-0b01-45b3-9520-6d9375d9edee-kube-api-access-k6779\") pod \"placement-db-create-k9ljd\" (UID: \"af843ebd-0b01-45b3-9520-6d9375d9edee\") " pod="openstack/placement-db-create-k9ljd"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.349178 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbzp4\" (UniqueName: \"kubernetes.io/projected/fb5fb317-bb87-451b-b0d3-696a9b32b1cb-kube-api-access-zbzp4\") pod \"placement-6731-account-create-update-5jb56\" (UID: \"fb5fb317-bb87-451b-b0d3-696a9b32b1cb\") " pod="openstack/placement-6731-account-create-update-5jb56"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.349640 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5fb317-bb87-451b-b0d3-696a9b32b1cb-operator-scripts\") pod \"placement-6731-account-create-update-5jb56\" (UID: \"fb5fb317-bb87-451b-b0d3-696a9b32b1cb\") " pod="openstack/placement-6731-account-create-update-5jb56"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.350537 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5fb317-bb87-451b-b0d3-696a9b32b1cb-operator-scripts\") pod \"placement-6731-account-create-update-5jb56\" (UID: \"fb5fb317-bb87-451b-b0d3-696a9b32b1cb\") " pod="openstack/placement-6731-account-create-update-5jb56"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.362619 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k9ljd"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.367666 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbzp4\" (UniqueName: \"kubernetes.io/projected/fb5fb317-bb87-451b-b0d3-696a9b32b1cb-kube-api-access-zbzp4\") pod \"placement-6731-account-create-update-5jb56\" (UID: \"fb5fb317-bb87-451b-b0d3-696a9b32b1cb\") " pod="openstack/placement-6731-account-create-update-5jb56"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.432010 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6731-account-create-update-5jb56"
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.881844 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-k9ljd"]
Jan 27 18:30:37 crc kubenswrapper[5049]: I0127 18:30:37.970027 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6731-account-create-update-5jb56"]
Jan 27 18:30:37 crc kubenswrapper[5049]: W0127 18:30:37.988731 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb5fb317_bb87_451b_b0d3_696a9b32b1cb.slice/crio-1c0b4be60c4e2e24897aaa35f131840b5052a88718526ad8ef7d6f24525c0477 WatchSource:0}: Error finding container 1c0b4be60c4e2e24897aaa35f131840b5052a88718526ad8ef7d6f24525c0477: Status 404 returned error can't find the container with id 1c0b4be60c4e2e24897aaa35f131840b5052a88718526ad8ef7d6f24525c0477
Jan 27 18:30:38 crc kubenswrapper[5049]: I0127 18:30:38.440402 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k9ljd" event={"ID":"af843ebd-0b01-45b3-9520-6d9375d9edee","Type":"ContainerStarted","Data":"f55509a753c5725b4a1b08693bf525e4bfada980fe3a3250510c8a0ed11eece2"}
Jan 27 18:30:38 crc kubenswrapper[5049]: I0127 18:30:38.440973 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k9ljd" event={"ID":"af843ebd-0b01-45b3-9520-6d9375d9edee","Type":"ContainerStarted","Data":"7335b118228463ae4c0bcc4e89085fce18f79d0d39699dd35ed2bbd2f0ac1b56"}
Jan 27 18:30:38 crc kubenswrapper[5049]: I0127 18:30:38.442716 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6731-account-create-update-5jb56" event={"ID":"fb5fb317-bb87-451b-b0d3-696a9b32b1cb","Type":"ContainerStarted","Data":"0ca7b1bea2b2b7078eea98c9c9dce521fd838ea3cfe8a84e150c8ddc9d456419"}
Jan 27 18:30:38 crc kubenswrapper[5049]: I0127 18:30:38.442774 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6731-account-create-update-5jb56" event={"ID":"fb5fb317-bb87-451b-b0d3-696a9b32b1cb","Type":"ContainerStarted","Data":"1c0b4be60c4e2e24897aaa35f131840b5052a88718526ad8ef7d6f24525c0477"}
Jan 27 18:30:38 crc kubenswrapper[5049]: I0127 18:30:38.481570 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6731-account-create-update-5jb56" podStartSLOduration=1.481549137 podStartE2EDuration="1.481549137s" podCreationTimestamp="2026-01-27 18:30:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:38.479296883 +0000 UTC m=+5613.578270432" watchObservedRunningTime="2026-01-27 18:30:38.481549137 +0000 UTC m=+5613.580522686"
Jan 27 18:30:39 crc kubenswrapper[5049]: I0127 18:30:39.452181 5049 generic.go:334] "Generic (PLEG): container finished" podID="af843ebd-0b01-45b3-9520-6d9375d9edee" containerID="f55509a753c5725b4a1b08693bf525e4bfada980fe3a3250510c8a0ed11eece2" exitCode=0
Jan 27 18:30:39 crc kubenswrapper[5049]: I0127 18:30:39.452393 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k9ljd" event={"ID":"af843ebd-0b01-45b3-9520-6d9375d9edee","Type":"ContainerDied","Data":"f55509a753c5725b4a1b08693bf525e4bfada980fe3a3250510c8a0ed11eece2"}
Jan 27 18:30:39 crc kubenswrapper[5049]: I0127 18:30:39.455240 5049 generic.go:334] "Generic (PLEG): container finished" podID="fb5fb317-bb87-451b-b0d3-696a9b32b1cb" containerID="0ca7b1bea2b2b7078eea98c9c9dce521fd838ea3cfe8a84e150c8ddc9d456419" exitCode=0
Jan 27 18:30:39 crc kubenswrapper[5049]: I0127 18:30:39.455275 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6731-account-create-update-5jb56" event={"ID":"fb5fb317-bb87-451b-b0d3-696a9b32b1cb","Type":"ContainerDied","Data":"0ca7b1bea2b2b7078eea98c9c9dce521fd838ea3cfe8a84e150c8ddc9d456419"}
Jan 27 18:30:39 crc kubenswrapper[5049]: I0127 18:30:39.792583 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k9ljd"
Jan 27 18:30:39 crc kubenswrapper[5049]: I0127 18:30:39.903226 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af843ebd-0b01-45b3-9520-6d9375d9edee-operator-scripts\") pod \"af843ebd-0b01-45b3-9520-6d9375d9edee\" (UID: \"af843ebd-0b01-45b3-9520-6d9375d9edee\") "
Jan 27 18:30:39 crc kubenswrapper[5049]: I0127 18:30:39.903452 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6779\" (UniqueName: \"kubernetes.io/projected/af843ebd-0b01-45b3-9520-6d9375d9edee-kube-api-access-k6779\") pod \"af843ebd-0b01-45b3-9520-6d9375d9edee\" (UID: \"af843ebd-0b01-45b3-9520-6d9375d9edee\") "
Jan 27 18:30:39 crc kubenswrapper[5049]: I0127 18:30:39.904911 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af843ebd-0b01-45b3-9520-6d9375d9edee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af843ebd-0b01-45b3-9520-6d9375d9edee" (UID: "af843ebd-0b01-45b3-9520-6d9375d9edee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:30:39 crc kubenswrapper[5049]: I0127 18:30:39.910225 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af843ebd-0b01-45b3-9520-6d9375d9edee-kube-api-access-k6779" (OuterVolumeSpecName: "kube-api-access-k6779") pod "af843ebd-0b01-45b3-9520-6d9375d9edee" (UID: "af843ebd-0b01-45b3-9520-6d9375d9edee"). InnerVolumeSpecName "kube-api-access-k6779". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:30:40 crc kubenswrapper[5049]: I0127 18:30:40.005725 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6779\" (UniqueName: \"kubernetes.io/projected/af843ebd-0b01-45b3-9520-6d9375d9edee-kube-api-access-k6779\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:40 crc kubenswrapper[5049]: I0127 18:30:40.006066 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af843ebd-0b01-45b3-9520-6d9375d9edee-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:40 crc kubenswrapper[5049]: I0127 18:30:40.465749 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k9ljd" event={"ID":"af843ebd-0b01-45b3-9520-6d9375d9edee","Type":"ContainerDied","Data":"7335b118228463ae4c0bcc4e89085fce18f79d0d39699dd35ed2bbd2f0ac1b56"}
Jan 27 18:30:40 crc kubenswrapper[5049]: I0127 18:30:40.465798 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k9ljd"
Jan 27 18:30:40 crc kubenswrapper[5049]: I0127 18:30:40.465833 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7335b118228463ae4c0bcc4e89085fce18f79d0d39699dd35ed2bbd2f0ac1b56"
Jan 27 18:30:40 crc kubenswrapper[5049]: I0127 18:30:40.859306 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6731-account-create-update-5jb56"
Jan 27 18:30:40 crc kubenswrapper[5049]: I0127 18:30:40.925414 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5fb317-bb87-451b-b0d3-696a9b32b1cb-operator-scripts\") pod \"fb5fb317-bb87-451b-b0d3-696a9b32b1cb\" (UID: \"fb5fb317-bb87-451b-b0d3-696a9b32b1cb\") "
Jan 27 18:30:40 crc kubenswrapper[5049]: I0127 18:30:40.925479 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbzp4\" (UniqueName: \"kubernetes.io/projected/fb5fb317-bb87-451b-b0d3-696a9b32b1cb-kube-api-access-zbzp4\") pod \"fb5fb317-bb87-451b-b0d3-696a9b32b1cb\" (UID: \"fb5fb317-bb87-451b-b0d3-696a9b32b1cb\") "
Jan 27 18:30:40 crc kubenswrapper[5049]: I0127 18:30:40.926075 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb5fb317-bb87-451b-b0d3-696a9b32b1cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb5fb317-bb87-451b-b0d3-696a9b32b1cb" (UID: "fb5fb317-bb87-451b-b0d3-696a9b32b1cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:30:40 crc kubenswrapper[5049]: I0127 18:30:40.932426 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb5fb317-bb87-451b-b0d3-696a9b32b1cb-kube-api-access-zbzp4" (OuterVolumeSpecName: "kube-api-access-zbzp4") pod "fb5fb317-bb87-451b-b0d3-696a9b32b1cb" (UID: "fb5fb317-bb87-451b-b0d3-696a9b32b1cb"). InnerVolumeSpecName "kube-api-access-zbzp4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:30:41 crc kubenswrapper[5049]: I0127 18:30:41.027994 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5fb317-bb87-451b-b0d3-696a9b32b1cb-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:41 crc kubenswrapper[5049]: I0127 18:30:41.028059 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbzp4\" (UniqueName: \"kubernetes.io/projected/fb5fb317-bb87-451b-b0d3-696a9b32b1cb-kube-api-access-zbzp4\") on node \"crc\" DevicePath \"\""
Jan 27 18:30:41 crc kubenswrapper[5049]: I0127 18:30:41.475922 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6731-account-create-update-5jb56" event={"ID":"fb5fb317-bb87-451b-b0d3-696a9b32b1cb","Type":"ContainerDied","Data":"1c0b4be60c4e2e24897aaa35f131840b5052a88718526ad8ef7d6f24525c0477"}
Jan 27 18:30:41 crc kubenswrapper[5049]: I0127 18:30:41.475995 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c0b4be60c4e2e24897aaa35f131840b5052a88718526ad8ef7d6f24525c0477"
Need to start a new one" pod="openstack/placement-6731-account-create-update-5jb56" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.405663 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-whn7b"] Jan 27 18:30:42 crc kubenswrapper[5049]: E0127 18:30:42.406631 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af843ebd-0b01-45b3-9520-6d9375d9edee" containerName="mariadb-database-create" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.406653 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="af843ebd-0b01-45b3-9520-6d9375d9edee" containerName="mariadb-database-create" Jan 27 18:30:42 crc kubenswrapper[5049]: E0127 18:30:42.406720 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb5fb317-bb87-451b-b0d3-696a9b32b1cb" containerName="mariadb-account-create-update" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.406729 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb5fb317-bb87-451b-b0d3-696a9b32b1cb" containerName="mariadb-account-create-update" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.406969 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb5fb317-bb87-451b-b0d3-696a9b32b1cb" containerName="mariadb-account-create-update" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.406990 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="af843ebd-0b01-45b3-9520-6d9375d9edee" containerName="mariadb-database-create" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.407783 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.410653 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.410865 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.411604 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-v7r47" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.419014 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-whn7b"] Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.439736 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b5b8c95d9-xhmzb"] Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.441481 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.455651 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmq75\" (UniqueName: \"kubernetes.io/projected/85e646e4-1dd4-4feb-b585-e6e85dec1822-kube-api-access-hmq75\") pod \"placement-db-sync-whn7b\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.455702 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-config-data\") pod \"placement-db-sync-whn7b\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.455805 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-scripts\") pod \"placement-db-sync-whn7b\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.455857 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-combined-ca-bundle\") pod \"placement-db-sync-whn7b\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.455919 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85e646e4-1dd4-4feb-b585-e6e85dec1822-logs\") pod \"placement-db-sync-whn7b\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.480639 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b5b8c95d9-xhmzb"] Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.558795 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-config-data\") pod \"placement-db-sync-whn7b\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.558866 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-config\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.558903 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-ovsdbserver-nb\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.559034 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-scripts\") pod \"placement-db-sync-whn7b\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.559106 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-combined-ca-bundle\") pod \"placement-db-sync-whn7b\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.559393 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85e646e4-1dd4-4feb-b585-e6e85dec1822-logs\") pod \"placement-db-sync-whn7b\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.559454 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-ovsdbserver-sb\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.559888 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85e646e4-1dd4-4feb-b585-e6e85dec1822-logs\") pod \"placement-db-sync-whn7b\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.560064 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-dns-svc\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.560974 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92g4s\" (UniqueName: \"kubernetes.io/projected/927e7562-d4d8-4472-870e-02473145a9b6-kube-api-access-92g4s\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.561099 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmq75\" (UniqueName: \"kubernetes.io/projected/85e646e4-1dd4-4feb-b585-e6e85dec1822-kube-api-access-hmq75\") pod \"placement-db-sync-whn7b\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.563529 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-config-data\") pod \"placement-db-sync-whn7b\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.566345 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-combined-ca-bundle\") pod \"placement-db-sync-whn7b\" 
(UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.574095 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-scripts\") pod \"placement-db-sync-whn7b\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.580327 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmq75\" (UniqueName: \"kubernetes.io/projected/85e646e4-1dd4-4feb-b585-e6e85dec1822-kube-api-access-hmq75\") pod \"placement-db-sync-whn7b\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.663352 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-ovsdbserver-sb\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.663421 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-dns-svc\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.663463 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92g4s\" (UniqueName: \"kubernetes.io/projected/927e7562-d4d8-4472-870e-02473145a9b6-kube-api-access-92g4s\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.663527 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-config\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.663564 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-ovsdbserver-nb\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.664970 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-dns-svc\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.665036 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-ovsdbserver-sb\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 
18:30:42.665126 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-ovsdbserver-nb\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.665720 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-config\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.683840 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92g4s\" (UniqueName: \"kubernetes.io/projected/927e7562-d4d8-4472-870e-02473145a9b6-kube-api-access-92g4s\") pod \"dnsmasq-dns-5b5b8c95d9-xhmzb\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.731849 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:42 crc kubenswrapper[5049]: I0127 18:30:42.770226 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:43 crc kubenswrapper[5049]: I0127 18:30:43.214404 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-whn7b"] Jan 27 18:30:43 crc kubenswrapper[5049]: W0127 18:30:43.215984 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85e646e4_1dd4_4feb_b585_e6e85dec1822.slice/crio-fceae26368ad6c3c0208cf6f03caa414b4dfb2947c793d73ed9cf195410f29a7 WatchSource:0}: Error finding container fceae26368ad6c3c0208cf6f03caa414b4dfb2947c793d73ed9cf195410f29a7: Status 404 returned error can't find the container with id fceae26368ad6c3c0208cf6f03caa414b4dfb2947c793d73ed9cf195410f29a7 Jan 27 18:30:43 crc kubenswrapper[5049]: I0127 18:30:43.319975 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b5b8c95d9-xhmzb"] Jan 27 18:30:43 crc kubenswrapper[5049]: I0127 18:30:43.499344 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" event={"ID":"927e7562-d4d8-4472-870e-02473145a9b6","Type":"ContainerStarted","Data":"2deb9010f83f6966958ff256c46b35f204ffd54220d45e486510a74913396a11"} Jan 27 18:30:43 crc kubenswrapper[5049]: I0127 18:30:43.501535 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-whn7b" event={"ID":"85e646e4-1dd4-4feb-b585-e6e85dec1822","Type":"ContainerStarted","Data":"e621beb8761f27433d89f78a7a7ac981e929460cd13a3ac9bc18b425354365f0"} Jan 27 18:30:43 crc kubenswrapper[5049]: I0127 18:30:43.501558 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-whn7b" event={"ID":"85e646e4-1dd4-4feb-b585-e6e85dec1822","Type":"ContainerStarted","Data":"fceae26368ad6c3c0208cf6f03caa414b4dfb2947c793d73ed9cf195410f29a7"} Jan 27 18:30:43 crc kubenswrapper[5049]: I0127 18:30:43.521970 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-whn7b" podStartSLOduration=1.521947504 podStartE2EDuration="1.521947504s" podCreationTimestamp="2026-01-27 18:30:42 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:43.517892029 +0000 UTC m=+5618.616865578" watchObservedRunningTime="2026-01-27 18:30:43.521947504 +0000 UTC m=+5618.620921053" Jan 27 18:30:44 crc kubenswrapper[5049]: I0127 18:30:44.514236 5049 generic.go:334] "Generic (PLEG): container finished" podID="927e7562-d4d8-4472-870e-02473145a9b6" containerID="4602dba74ca49afd5790026b0563d4e3682bbe94f5d2fb4f063cb51bb6dd55bd" exitCode=0 Jan 27 18:30:44 crc kubenswrapper[5049]: I0127 18:30:44.514393 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" event={"ID":"927e7562-d4d8-4472-870e-02473145a9b6","Type":"ContainerDied","Data":"4602dba74ca49afd5790026b0563d4e3682bbe94f5d2fb4f063cb51bb6dd55bd"} Jan 27 18:30:45 crc kubenswrapper[5049]: I0127 18:30:45.526259 5049 generic.go:334] "Generic (PLEG): container finished" podID="85e646e4-1dd4-4feb-b585-e6e85dec1822" containerID="e621beb8761f27433d89f78a7a7ac981e929460cd13a3ac9bc18b425354365f0" exitCode=0 Jan 27 18:30:45 crc kubenswrapper[5049]: I0127 18:30:45.526358 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-whn7b" event={"ID":"85e646e4-1dd4-4feb-b585-e6e85dec1822","Type":"ContainerDied","Data":"e621beb8761f27433d89f78a7a7ac981e929460cd13a3ac9bc18b425354365f0"} Jan 27 18:30:45 crc kubenswrapper[5049]: I0127 18:30:45.530045 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" event={"ID":"927e7562-d4d8-4472-870e-02473145a9b6","Type":"ContainerStarted","Data":"ab885909620c4798702b5c33c41b5fbbbfbc52faff7a3a42ac3d69c0ca2f1bea"} Jan 27 18:30:45 crc kubenswrapper[5049]: I0127 18:30:45.530234 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:45 crc kubenswrapper[5049]: I0127 18:30:45.576372 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" podStartSLOduration=3.576350857 podStartE2EDuration="3.576350857s" podCreationTimestamp="2026-01-27 18:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:45.573109465 +0000 UTC m=+5620.672083034" watchObservedRunningTime="2026-01-27 18:30:45.576350857 +0000 UTC m=+5620.675324406" Jan 27 18:30:46 crc kubenswrapper[5049]: I0127 18:30:46.878119 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:46 crc kubenswrapper[5049]: I0127 18:30:46.953816 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-scripts\") pod \"85e646e4-1dd4-4feb-b585-e6e85dec1822\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " Jan 27 18:30:46 crc kubenswrapper[5049]: I0127 18:30:46.953942 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-combined-ca-bundle\") pod \"85e646e4-1dd4-4feb-b585-e6e85dec1822\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " Jan 27 18:30:46 crc kubenswrapper[5049]: I0127 18:30:46.954247 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmq75\" (UniqueName: \"kubernetes.io/projected/85e646e4-1dd4-4feb-b585-e6e85dec1822-kube-api-access-hmq75\") pod \"85e646e4-1dd4-4feb-b585-e6e85dec1822\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " Jan 27 18:30:46 crc kubenswrapper[5049]: I0127 18:30:46.954309 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85e646e4-1dd4-4feb-b585-e6e85dec1822-logs\") pod \"85e646e4-1dd4-4feb-b585-e6e85dec1822\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " Jan 27 18:30:46 crc kubenswrapper[5049]: I0127 18:30:46.954347 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-config-data\") pod \"85e646e4-1dd4-4feb-b585-e6e85dec1822\" (UID: \"85e646e4-1dd4-4feb-b585-e6e85dec1822\") " Jan 27 18:30:46 crc kubenswrapper[5049]: I0127 18:30:46.954823 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85e646e4-1dd4-4feb-b585-e6e85dec1822-logs" (OuterVolumeSpecName: "logs") pod "85e646e4-1dd4-4feb-b585-e6e85dec1822" (UID: "85e646e4-1dd4-4feb-b585-e6e85dec1822"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:30:46 crc kubenswrapper[5049]: I0127 18:30:46.963577 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-scripts" (OuterVolumeSpecName: "scripts") pod "85e646e4-1dd4-4feb-b585-e6e85dec1822" (UID: "85e646e4-1dd4-4feb-b585-e6e85dec1822"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:46 crc kubenswrapper[5049]: I0127 18:30:46.966077 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e646e4-1dd4-4feb-b585-e6e85dec1822-kube-api-access-hmq75" (OuterVolumeSpecName: "kube-api-access-hmq75") pod "85e646e4-1dd4-4feb-b585-e6e85dec1822" (UID: "85e646e4-1dd4-4feb-b585-e6e85dec1822"). InnerVolumeSpecName "kube-api-access-hmq75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:46 crc kubenswrapper[5049]: I0127 18:30:46.987485 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-config-data" (OuterVolumeSpecName: "config-data") pod "85e646e4-1dd4-4feb-b585-e6e85dec1822" (UID: "85e646e4-1dd4-4feb-b585-e6e85dec1822"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:46 crc kubenswrapper[5049]: I0127 18:30:46.988142 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85e646e4-1dd4-4feb-b585-e6e85dec1822" (UID: "85e646e4-1dd4-4feb-b585-e6e85dec1822"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.058158 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85e646e4-1dd4-4feb-b585-e6e85dec1822-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.058203 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.058216 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.058228 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e646e4-1dd4-4feb-b585-e6e85dec1822-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.058239 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmq75\" (UniqueName: \"kubernetes.io/projected/85e646e4-1dd4-4feb-b585-e6e85dec1822-kube-api-access-hmq75\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.551029 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-whn7b" event={"ID":"85e646e4-1dd4-4feb-b585-e6e85dec1822","Type":"ContainerDied","Data":"fceae26368ad6c3c0208cf6f03caa414b4dfb2947c793d73ed9cf195410f29a7"} Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.551506 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fceae26368ad6c3c0208cf6f03caa414b4dfb2947c793d73ed9cf195410f29a7" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.551095 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-whn7b" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.732361 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-79994c87b8-v7w7s"] Jan 27 18:30:47 crc kubenswrapper[5049]: E0127 18:30:47.732938 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e646e4-1dd4-4feb-b585-e6e85dec1822" containerName="placement-db-sync" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.732960 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e646e4-1dd4-4feb-b585-e6e85dec1822" containerName="placement-db-sync" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.736518 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e646e4-1dd4-4feb-b585-e6e85dec1822" containerName="placement-db-sync" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.746256 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.752264 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.752561 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-v7r47" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.752628 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.755900 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79994c87b8-v7w7s"] Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.775134 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-combined-ca-bundle\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.775512 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-logs\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.775721 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6lgb\" (UniqueName: \"kubernetes.io/projected/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-kube-api-access-q6lgb\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.775856 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-scripts\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.776016 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-config-data\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.879111 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-config-data\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.879283 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-combined-ca-bundle\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 
18:30:47.879348 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-logs\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.879424 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6lgb\" (UniqueName: \"kubernetes.io/projected/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-kube-api-access-q6lgb\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.879465 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-scripts\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.880449 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-logs\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.885822 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-scripts\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.886329 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-config-data\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.894885 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-combined-ca-bundle\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:47 crc kubenswrapper[5049]: I0127 18:30:47.900053 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6lgb\" (UniqueName: \"kubernetes.io/projected/45e8b898-a4dd-4f5c-be7d-e849fd6530ec-kube-api-access-q6lgb\") pod \"placement-79994c87b8-v7w7s\" (UID: \"45e8b898-a4dd-4f5c-be7d-e849fd6530ec\") " pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:48 crc kubenswrapper[5049]: I0127 18:30:48.089527 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:48 crc kubenswrapper[5049]: I0127 18:30:48.521239 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-79994c87b8-v7w7s"] Jan 27 18:30:48 crc kubenswrapper[5049]: W0127 18:30:48.527349 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45e8b898_a4dd_4f5c_be7d_e849fd6530ec.slice/crio-b0861ba6b9ca1d1217caa9ed9eac05a5138dad906196f6a0338446291908e6f7 WatchSource:0}: Error finding container b0861ba6b9ca1d1217caa9ed9eac05a5138dad906196f6a0338446291908e6f7: Status 404 returned error can't find the container with id b0861ba6b9ca1d1217caa9ed9eac05a5138dad906196f6a0338446291908e6f7 Jan 27 18:30:48 crc kubenswrapper[5049]: I0127 18:30:48.558993 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79994c87b8-v7w7s" event={"ID":"45e8b898-a4dd-4f5c-be7d-e849fd6530ec","Type":"ContainerStarted","Data":"b0861ba6b9ca1d1217caa9ed9eac05a5138dad906196f6a0338446291908e6f7"} Jan 27 18:30:49 crc kubenswrapper[5049]: I0127 18:30:49.568980 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79994c87b8-v7w7s" event={"ID":"45e8b898-a4dd-4f5c-be7d-e849fd6530ec","Type":"ContainerStarted","Data":"022cda72fd90a435bc2eb472df3e6c812932660083b4ceff429cb677d9947334"} Jan 27 18:30:49 crc kubenswrapper[5049]: I0127 18:30:49.569479 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-79994c87b8-v7w7s" event={"ID":"45e8b898-a4dd-4f5c-be7d-e849fd6530ec","Type":"ContainerStarted","Data":"7028953f6082dcaed492014f4a32debf9cadaf5db99b58c2ddbef84a05b88bb0"} Jan 27 18:30:49 crc kubenswrapper[5049]: I0127 18:30:49.569499 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:49 crc kubenswrapper[5049]: I0127 18:30:49.569516 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:30:49 crc kubenswrapper[5049]: I0127 18:30:49.589195 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-79994c87b8-v7w7s" podStartSLOduration=2.589147682 podStartE2EDuration="2.589147682s" podCreationTimestamp="2026-01-27 18:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:30:49.587049863 +0000 UTC m=+5624.686023452" watchObservedRunningTime="2026-01-27 18:30:49.589147682 +0000 UTC m=+5624.688121251" Jan 27 18:30:52 crc kubenswrapper[5049]: I0127 18:30:52.771879 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:30:52 crc kubenswrapper[5049]: I0127 18:30:52.832869 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d7646d5bc-qjlng"] Jan 27 18:30:52 crc kubenswrapper[5049]: I0127 18:30:52.833105 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" podUID="9d7a32a7-3cdd-440b-bd73-080c07057dad" containerName="dnsmasq-dns" containerID="cri-o://75d66a6574973e8a94d5a15850bf669b4058a851009beb21db4d9ab3c841886a" gracePeriod=10 Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.330379 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.383853 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-dns-svc\") pod \"9d7a32a7-3cdd-440b-bd73-080c07057dad\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.383920 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2mnt\" (UniqueName: \"kubernetes.io/projected/9d7a32a7-3cdd-440b-bd73-080c07057dad-kube-api-access-m2mnt\") pod \"9d7a32a7-3cdd-440b-bd73-080c07057dad\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.383967 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-ovsdbserver-sb\") pod \"9d7a32a7-3cdd-440b-bd73-080c07057dad\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.384113 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-ovsdbserver-nb\") pod \"9d7a32a7-3cdd-440b-bd73-080c07057dad\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.384137 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-config\") pod \"9d7a32a7-3cdd-440b-bd73-080c07057dad\" (UID: \"9d7a32a7-3cdd-440b-bd73-080c07057dad\") " Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.396655 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d7a32a7-3cdd-440b-bd73-080c07057dad-kube-api-access-m2mnt" (OuterVolumeSpecName: "kube-api-access-m2mnt") pod "9d7a32a7-3cdd-440b-bd73-080c07057dad" (UID: "9d7a32a7-3cdd-440b-bd73-080c07057dad"). InnerVolumeSpecName "kube-api-access-m2mnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.433648 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d7a32a7-3cdd-440b-bd73-080c07057dad" (UID: "9d7a32a7-3cdd-440b-bd73-080c07057dad"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.439715 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d7a32a7-3cdd-440b-bd73-080c07057dad" (UID: "9d7a32a7-3cdd-440b-bd73-080c07057dad"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.447254 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-config" (OuterVolumeSpecName: "config") pod "9d7a32a7-3cdd-440b-bd73-080c07057dad" (UID: "9d7a32a7-3cdd-440b-bd73-080c07057dad"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.453921 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d7a32a7-3cdd-440b-bd73-080c07057dad" (UID: "9d7a32a7-3cdd-440b-bd73-080c07057dad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.485606 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.485636 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.485645 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.485667 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2mnt\" (UniqueName: \"kubernetes.io/projected/9d7a32a7-3cdd-440b-bd73-080c07057dad-kube-api-access-m2mnt\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.485697 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d7a32a7-3cdd-440b-bd73-080c07057dad-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.629141 5049 generic.go:334] "Generic (PLEG): container finished" podID="9d7a32a7-3cdd-440b-bd73-080c07057dad" containerID="75d66a6574973e8a94d5a15850bf669b4058a851009beb21db4d9ab3c841886a" exitCode=0 Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.629574 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" event={"ID":"9d7a32a7-3cdd-440b-bd73-080c07057dad","Type":"ContainerDied","Data":"75d66a6574973e8a94d5a15850bf669b4058a851009beb21db4d9ab3c841886a"} Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.629880 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" event={"ID":"9d7a32a7-3cdd-440b-bd73-080c07057dad","Type":"ContainerDied","Data":"7f05ddb2b6c1cad8fa16597adf8ef8882cbfe0e14250fd2d6b45a8b70deed565"} Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.629626 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d7646d5bc-qjlng" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.630058 5049 scope.go:117] "RemoveContainer" containerID="75d66a6574973e8a94d5a15850bf669b4058a851009beb21db4d9ab3c841886a" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.683343 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d7646d5bc-qjlng"] Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.685279 5049 scope.go:117] "RemoveContainer" containerID="8e1dbdf746d94c3a044d07d3e9d353212dea0c5bbc5cb8fe28167952f4ce2a32" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.693012 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d7646d5bc-qjlng"] Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.719637 5049 scope.go:117] "RemoveContainer" containerID="75d66a6574973e8a94d5a15850bf669b4058a851009beb21db4d9ab3c841886a" Jan 27 18:30:53 crc kubenswrapper[5049]: E0127 18:30:53.720117 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75d66a6574973e8a94d5a15850bf669b4058a851009beb21db4d9ab3c841886a\": container with ID starting with 75d66a6574973e8a94d5a15850bf669b4058a851009beb21db4d9ab3c841886a not found: ID does not exist" containerID="75d66a6574973e8a94d5a15850bf669b4058a851009beb21db4d9ab3c841886a" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.720166 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75d66a6574973e8a94d5a15850bf669b4058a851009beb21db4d9ab3c841886a"} err="failed to get container status \"75d66a6574973e8a94d5a15850bf669b4058a851009beb21db4d9ab3c841886a\": rpc error: code = NotFound desc = could not find container \"75d66a6574973e8a94d5a15850bf669b4058a851009beb21db4d9ab3c841886a\": container with ID starting with 75d66a6574973e8a94d5a15850bf669b4058a851009beb21db4d9ab3c841886a not found: ID does not exist" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.720207 5049 scope.go:117] "RemoveContainer" containerID="8e1dbdf746d94c3a044d07d3e9d353212dea0c5bbc5cb8fe28167952f4ce2a32" Jan 27 18:30:53 crc kubenswrapper[5049]: E0127 18:30:53.720906 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e1dbdf746d94c3a044d07d3e9d353212dea0c5bbc5cb8fe28167952f4ce2a32\": container with ID starting with 8e1dbdf746d94c3a044d07d3e9d353212dea0c5bbc5cb8fe28167952f4ce2a32 not found: ID does not exist" containerID="8e1dbdf746d94c3a044d07d3e9d353212dea0c5bbc5cb8fe28167952f4ce2a32" Jan 27 18:30:53 crc kubenswrapper[5049]: I0127 18:30:53.720947 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e1dbdf746d94c3a044d07d3e9d353212dea0c5bbc5cb8fe28167952f4ce2a32"} err="failed to get container status \"8e1dbdf746d94c3a044d07d3e9d353212dea0c5bbc5cb8fe28167952f4ce2a32\": rpc error: code = NotFound desc = could not find container \"8e1dbdf746d94c3a044d07d3e9d353212dea0c5bbc5cb8fe28167952f4ce2a32\": container with ID starting with 8e1dbdf746d94c3a044d07d3e9d353212dea0c5bbc5cb8fe28167952f4ce2a32 not found: ID does not exist" Jan 27 18:30:55 crc kubenswrapper[5049]: I0127 18:30:55.658621 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d7a32a7-3cdd-440b-bd73-080c07057dad" path="/var/lib/kubelet/pods/9d7a32a7-3cdd-440b-bd73-080c07057dad/volumes" Jan 27 18:31:16 crc kubenswrapper[5049]: I0127 18:31:16.806431 5049 
scope.go:117] "RemoveContainer" containerID="41c610df21115985f25c803b50a8e71c2d2b8a61ac054a25b002ede233d67a64" Jan 27 18:31:16 crc kubenswrapper[5049]: I0127 18:31:16.838960 5049 scope.go:117] "RemoveContainer" containerID="0ed14933c97394a1e2e8695967429081e4955e395cc27541f802de4720a1177a" Jan 27 18:31:19 crc kubenswrapper[5049]: I0127 18:31:19.553548 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:31:19 crc kubenswrapper[5049]: I0127 18:31:19.554121 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-79994c87b8-v7w7s" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.114713 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-9dchc"] Jan 27 18:31:44 crc kubenswrapper[5049]: E0127 18:31:44.115841 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d7a32a7-3cdd-440b-bd73-080c07057dad" containerName="dnsmasq-dns" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.115863 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d7a32a7-3cdd-440b-bd73-080c07057dad" containerName="dnsmasq-dns" Jan 27 18:31:44 crc kubenswrapper[5049]: E0127 18:31:44.115902 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d7a32a7-3cdd-440b-bd73-080c07057dad" containerName="init" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.115912 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d7a32a7-3cdd-440b-bd73-080c07057dad" containerName="init" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.116157 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d7a32a7-3cdd-440b-bd73-080c07057dad" containerName="dnsmasq-dns" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.116913 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9dchc" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.134801 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9dchc"] Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.199063 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-bg748"] Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.200542 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-bg748" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.210248 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thrqc\" (UniqueName: \"kubernetes.io/projected/a575262a-6daf-48b8-a260-386421b4d4bc-kube-api-access-thrqc\") pod \"nova-api-db-create-9dchc\" (UID: \"a575262a-6daf-48b8-a260-386421b4d4bc\") " pod="openstack/nova-api-db-create-9dchc" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.210368 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a575262a-6daf-48b8-a260-386421b4d4bc-operator-scripts\") pod \"nova-api-db-create-9dchc\" (UID: \"a575262a-6daf-48b8-a260-386421b4d4bc\") " pod="openstack/nova-api-db-create-9dchc" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.259192 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bg748"] Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.308535 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7h4kt"] Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.311506 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a575262a-6daf-48b8-a260-386421b4d4bc-operator-scripts\") pod \"nova-api-db-create-9dchc\" (UID: \"a575262a-6daf-48b8-a260-386421b4d4bc\") " pod="openstack/nova-api-db-create-9dchc" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.311570 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea96bc12-3a0b-4587-bcb1-ce13464facd7-operator-scripts\") pod \"nova-cell0-db-create-bg748\" (UID: \"ea96bc12-3a0b-4587-bcb1-ce13464facd7\") " pod="openstack/nova-cell0-db-create-bg748" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.311664 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlrrf\" (UniqueName: \"kubernetes.io/projected/ea96bc12-3a0b-4587-bcb1-ce13464facd7-kube-api-access-vlrrf\") pod \"nova-cell0-db-create-bg748\" (UID: \"ea96bc12-3a0b-4587-bcb1-ce13464facd7\") " pod="openstack/nova-cell0-db-create-bg748" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.311779 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thrqc\" (UniqueName: \"kubernetes.io/projected/a575262a-6daf-48b8-a260-386421b4d4bc-kube-api-access-thrqc\") pod \"nova-api-db-create-9dchc\" (UID: \"a575262a-6daf-48b8-a260-386421b4d4bc\") " pod="openstack/nova-api-db-create-9dchc" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.311910 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7h4kt" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.312500 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a575262a-6daf-48b8-a260-386421b4d4bc-operator-scripts\") pod \"nova-api-db-create-9dchc\" (UID: \"a575262a-6daf-48b8-a260-386421b4d4bc\") " pod="openstack/nova-api-db-create-9dchc" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.322947 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6831-account-create-update-8stn7"] Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.324348 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6831-account-create-update-8stn7" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.326454 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.364661 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thrqc\" (UniqueName: \"kubernetes.io/projected/a575262a-6daf-48b8-a260-386421b4d4bc-kube-api-access-thrqc\") pod \"nova-api-db-create-9dchc\" (UID: \"a575262a-6daf-48b8-a260-386421b4d4bc\") " pod="openstack/nova-api-db-create-9dchc" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.365142 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7h4kt"] Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.395353 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6831-account-create-update-8stn7"] Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.414024 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea96bc12-3a0b-4587-bcb1-ce13464facd7-operator-scripts\") pod \"nova-cell0-db-create-bg748\" (UID: \"ea96bc12-3a0b-4587-bcb1-ce13464facd7\") " pod="openstack/nova-cell0-db-create-bg748" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.414114 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhvzk\" (UniqueName: \"kubernetes.io/projected/7c5a8837-eb01-432c-bb15-d4f4ff541037-kube-api-access-dhvzk\") pod \"nova-cell1-db-create-7h4kt\" (UID: \"7c5a8837-eb01-432c-bb15-d4f4ff541037\") " pod="openstack/nova-cell1-db-create-7h4kt" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.414175 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlrrf\" (UniqueName: \"kubernetes.io/projected/ea96bc12-3a0b-4587-bcb1-ce13464facd7-kube-api-access-vlrrf\") pod \"nova-cell0-db-create-bg748\" (UID: \"ea96bc12-3a0b-4587-bcb1-ce13464facd7\") " pod="openstack/nova-cell0-db-create-bg748" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.414289 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c5a8837-eb01-432c-bb15-d4f4ff541037-operator-scripts\") pod \"nova-cell1-db-create-7h4kt\" (UID: \"7c5a8837-eb01-432c-bb15-d4f4ff541037\") " pod="openstack/nova-cell1-db-create-7h4kt" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.414309 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/997aff65-10ec-4070-b64e-34a4e434dde9-operator-scripts\") pod \"nova-api-6831-account-create-update-8stn7\" (UID: \"997aff65-10ec-4070-b64e-34a4e434dde9\") " pod="openstack/nova-api-6831-account-create-update-8stn7" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.414359 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdskl\" (UniqueName: \"kubernetes.io/projected/997aff65-10ec-4070-b64e-34a4e434dde9-kube-api-access-sdskl\") pod \"nova-api-6831-account-create-update-8stn7\" (UID: \"997aff65-10ec-4070-b64e-34a4e434dde9\") " pod="openstack/nova-api-6831-account-create-update-8stn7" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.421090 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea96bc12-3a0b-4587-bcb1-ce13464facd7-operator-scripts\") pod \"nova-cell0-db-create-bg748\" (UID: \"ea96bc12-3a0b-4587-bcb1-ce13464facd7\") " pod="openstack/nova-cell0-db-create-bg748" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.443956 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlrrf\" (UniqueName: \"kubernetes.io/projected/ea96bc12-3a0b-4587-bcb1-ce13464facd7-kube-api-access-vlrrf\") pod \"nova-cell0-db-create-bg748\" (UID: \"ea96bc12-3a0b-4587-bcb1-ce13464facd7\") " pod="openstack/nova-cell0-db-create-bg748" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.456991 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9dchc" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.515103 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8dc3-account-create-update-lbgkl"] Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.517819 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.519461 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.524300 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-bg748" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.553661 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdskl\" (UniqueName: \"kubernetes.io/projected/997aff65-10ec-4070-b64e-34a4e434dde9-kube-api-access-sdskl\") pod \"nova-api-6831-account-create-update-8stn7\" (UID: \"997aff65-10ec-4070-b64e-34a4e434dde9\") " pod="openstack/nova-api-6831-account-create-update-8stn7" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.516883 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdskl\" (UniqueName: \"kubernetes.io/projected/997aff65-10ec-4070-b64e-34a4e434dde9-kube-api-access-sdskl\") pod \"nova-api-6831-account-create-update-8stn7\" (UID: \"997aff65-10ec-4070-b64e-34a4e434dde9\") " pod="openstack/nova-api-6831-account-create-update-8stn7" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.563652 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhvzk\" (UniqueName: \"kubernetes.io/projected/7c5a8837-eb01-432c-bb15-d4f4ff541037-kube-api-access-dhvzk\") pod \"nova-cell1-db-create-7h4kt\" (UID: \"7c5a8837-eb01-432c-bb15-d4f4ff541037\") " pod="openstack/nova-cell1-db-create-7h4kt" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.564272 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c5a8837-eb01-432c-bb15-d4f4ff541037-operator-scripts\") pod \"nova-cell1-db-create-7h4kt\" (UID: \"7c5a8837-eb01-432c-bb15-d4f4ff541037\") " pod="openstack/nova-cell1-db-create-7h4kt" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.564418 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/997aff65-10ec-4070-b64e-34a4e434dde9-operator-scripts\") pod \"nova-api-6831-account-create-update-8stn7\" (UID: \"997aff65-10ec-4070-b64e-34a4e434dde9\") " pod="openstack/nova-api-6831-account-create-update-8stn7" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.565851 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/997aff65-10ec-4070-b64e-34a4e434dde9-operator-scripts\") pod \"nova-api-6831-account-create-update-8stn7\" (UID: \"997aff65-10ec-4070-b64e-34a4e434dde9\") " pod="openstack/nova-api-6831-account-create-update-8stn7" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.566613 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8dc3-account-create-update-lbgkl"] Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.567220 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c5a8837-eb01-432c-bb15-d4f4ff541037-operator-scripts\") pod \"nova-cell1-db-create-7h4kt\" (UID: \"7c5a8837-eb01-432c-bb15-d4f4ff541037\") " pod="openstack/nova-cell1-db-create-7h4kt" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.588898 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhvzk\" (UniqueName: \"kubernetes.io/projected/7c5a8837-eb01-432c-bb15-d4f4ff541037-kube-api-access-dhvzk\") pod \"nova-cell1-db-create-7h4kt\" (UID: \"7c5a8837-eb01-432c-bb15-d4f4ff541037\") " pod="openstack/nova-cell1-db-create-7h4kt" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.632652 5049 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7h4kt" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.644905 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6831-account-create-update-8stn7" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.666901 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa894f99-836d-4f71-853c-a90e0a049382-operator-scripts\") pod \"nova-cell0-8dc3-account-create-update-lbgkl\" (UID: \"fa894f99-836d-4f71-853c-a90e0a049382\") " pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.667123 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzvdc\" (UniqueName: \"kubernetes.io/projected/fa894f99-836d-4f71-853c-a90e0a049382-kube-api-access-fzvdc\") pod \"nova-cell0-8dc3-account-create-update-lbgkl\" (UID: \"fa894f99-836d-4f71-853c-a90e0a049382\") " pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.731052 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-b44a-account-create-update-4hmwk"] Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.732967 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.736876 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.745641 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b44a-account-create-update-4hmwk"] Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.773051 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b39c71ed-9976-4656-a5cd-16b9a340d80e-operator-scripts\") pod \"nova-cell1-b44a-account-create-update-4hmwk\" (UID: \"b39c71ed-9976-4656-a5cd-16b9a340d80e\") " pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.773121 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzvdc\" (UniqueName: \"kubernetes.io/projected/fa894f99-836d-4f71-853c-a90e0a049382-kube-api-access-fzvdc\") pod \"nova-cell0-8dc3-account-create-update-lbgkl\" (UID: \"fa894f99-836d-4f71-853c-a90e0a049382\") " pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.773188 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjghv\" (UniqueName: \"kubernetes.io/projected/b39c71ed-9976-4656-a5cd-16b9a340d80e-kube-api-access-qjghv\") pod \"nova-cell1-b44a-account-create-update-4hmwk\" (UID: \"b39c71ed-9976-4656-a5cd-16b9a340d80e\") " pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.773409 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa894f99-836d-4f71-853c-a90e0a049382-operator-scripts\") pod 
\"nova-cell0-8dc3-account-create-update-lbgkl\" (UID: \"fa894f99-836d-4f71-853c-a90e0a049382\") " pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.774268 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa894f99-836d-4f71-853c-a90e0a049382-operator-scripts\") pod \"nova-cell0-8dc3-account-create-update-lbgkl\" (UID: \"fa894f99-836d-4f71-853c-a90e0a049382\") " pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.794007 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzvdc\" (UniqueName: \"kubernetes.io/projected/fa894f99-836d-4f71-853c-a90e0a049382-kube-api-access-fzvdc\") pod \"nova-cell0-8dc3-account-create-update-lbgkl\" (UID: \"fa894f99-836d-4f71-853c-a90e0a049382\") " pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.875108 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjghv\" (UniqueName: \"kubernetes.io/projected/b39c71ed-9976-4656-a5cd-16b9a340d80e-kube-api-access-qjghv\") pod \"nova-cell1-b44a-account-create-update-4hmwk\" (UID: \"b39c71ed-9976-4656-a5cd-16b9a340d80e\") " pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.875924 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b39c71ed-9976-4656-a5cd-16b9a340d80e-operator-scripts\") pod \"nova-cell1-b44a-account-create-update-4hmwk\" (UID: \"b39c71ed-9976-4656-a5cd-16b9a340d80e\") " pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.876811 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b39c71ed-9976-4656-a5cd-16b9a340d80e-operator-scripts\") pod \"nova-cell1-b44a-account-create-update-4hmwk\" (UID: \"b39c71ed-9976-4656-a5cd-16b9a340d80e\") " pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.896612 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjghv\" (UniqueName: \"kubernetes.io/projected/b39c71ed-9976-4656-a5cd-16b9a340d80e-kube-api-access-qjghv\") pod \"nova-cell1-b44a-account-create-update-4hmwk\" (UID: \"b39c71ed-9976-4656-a5cd-16b9a340d80e\") " pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" Jan 27 18:31:44 crc kubenswrapper[5049]: I0127 18:31:44.934472 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" Jan 27 18:31:45 crc kubenswrapper[5049]: I0127 18:31:45.016415 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9dchc"] Jan 27 18:31:45 crc kubenswrapper[5049]: W0127 18:31:45.023743 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda575262a_6daf_48b8_a260_386421b4d4bc.slice/crio-c12aeb6f1c41e91a0188ebde6259437d7b77ddbfda5bde77249bc456120a1871 WatchSource:0}: Error finding container c12aeb6f1c41e91a0188ebde6259437d7b77ddbfda5bde77249bc456120a1871: Status 404 returned error can't find the container with id c12aeb6f1c41e91a0188ebde6259437d7b77ddbfda5bde77249bc456120a1871 Jan 27 18:31:45 crc kubenswrapper[5049]: I0127 18:31:45.055064 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" Jan 27 18:31:45 crc kubenswrapper[5049]: I0127 18:31:45.126689 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9dchc" event={"ID":"a575262a-6daf-48b8-a260-386421b4d4bc","Type":"ContainerStarted","Data":"c12aeb6f1c41e91a0188ebde6259437d7b77ddbfda5bde77249bc456120a1871"} Jan 27 18:31:45 crc kubenswrapper[5049]: I0127 18:31:45.173128 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bg748"] Jan 27 18:31:45 crc kubenswrapper[5049]: I0127 18:31:45.292453 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6831-account-create-update-8stn7"] Jan 27 18:31:45 crc kubenswrapper[5049]: I0127 18:31:45.303799 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7h4kt"] Jan 27 18:31:45 crc kubenswrapper[5049]: W0127 18:31:45.312490 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c5a8837_eb01_432c_bb15_d4f4ff541037.slice/crio-b37bd91385eb13c0f9f819b6ea4304fbd761f065212ba71de948192071442f6a WatchSource:0}: Error finding container b37bd91385eb13c0f9f819b6ea4304fbd761f065212ba71de948192071442f6a: Status 404 returned error can't find the container with id b37bd91385eb13c0f9f819b6ea4304fbd761f065212ba71de948192071442f6a Jan 27 18:31:45 crc kubenswrapper[5049]: I0127 18:31:45.472380 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8dc3-account-create-update-lbgkl"] Jan 27 18:31:45 crc kubenswrapper[5049]: I0127 18:31:45.606225 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b44a-account-create-update-4hmwk"] Jan 27 18:31:45 crc kubenswrapper[5049]: W0127 18:31:45.696378 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb39c71ed_9976_4656_a5cd_16b9a340d80e.slice/crio-4a1792a610a4c74fcd31f65b2148048bce19589e231bc5c5616c628070f3136c WatchSource:0}: Error finding container 4a1792a610a4c74fcd31f65b2148048bce19589e231bc5c5616c628070f3136c: Status 404 returned error can't find the container with id 4a1792a610a4c74fcd31f65b2148048bce19589e231bc5c5616c628070f3136c Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.136256 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7h4kt" event={"ID":"7c5a8837-eb01-432c-bb15-d4f4ff541037","Type":"ContainerStarted","Data":"aed5119a51351316143ef9f1ba64f4a18eb589ce36185c2aa03cfffdb8b6861a"} Jan 
27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.136865 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7h4kt" event={"ID":"7c5a8837-eb01-432c-bb15-d4f4ff541037","Type":"ContainerStarted","Data":"b37bd91385eb13c0f9f819b6ea4304fbd761f065212ba71de948192071442f6a"} Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.139994 5049 generic.go:334] "Generic (PLEG): container finished" podID="a575262a-6daf-48b8-a260-386421b4d4bc" containerID="3f44d425504126e2ae93492477c7492ea8dc2ba92bff8d3b5d011e8d38266140" exitCode=0 Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.140078 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9dchc" event={"ID":"a575262a-6daf-48b8-a260-386421b4d4bc","Type":"ContainerDied","Data":"3f44d425504126e2ae93492477c7492ea8dc2ba92bff8d3b5d011e8d38266140"} Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.142092 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" event={"ID":"fa894f99-836d-4f71-853c-a90e0a049382","Type":"ContainerStarted","Data":"9a518cd8b0f60726222af34c220957d9be5854681d8b0f1ceab2cf5173ea31dd"} Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.142134 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" event={"ID":"fa894f99-836d-4f71-853c-a90e0a049382","Type":"ContainerStarted","Data":"11680cc75ce110d79d649f712820bcb286901213570ca755540dad12f25f6608"} Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.145072 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6831-account-create-update-8stn7" event={"ID":"997aff65-10ec-4070-b64e-34a4e434dde9","Type":"ContainerStarted","Data":"da8b1747c79193f3804a359e558e2984afd0b675a51e4165dbb48d64c3d74bc7"} Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.145110 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6831-account-create-update-8stn7" event={"ID":"997aff65-10ec-4070-b64e-34a4e434dde9","Type":"ContainerStarted","Data":"0915920304fee1f9bcf8a740d0d381dd12dd291254eba29d517bee4ea1bca2c1"} Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.146971 5049 generic.go:334] "Generic (PLEG): container finished" podID="ea96bc12-3a0b-4587-bcb1-ce13464facd7" containerID="f96d0b91976932537c5d88df64e2600cbcfa5288b1c40fa840fa60285eb64256" exitCode=0 Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.147038 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bg748" event={"ID":"ea96bc12-3a0b-4587-bcb1-ce13464facd7","Type":"ContainerDied","Data":"f96d0b91976932537c5d88df64e2600cbcfa5288b1c40fa840fa60285eb64256"} Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.147060 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bg748" event={"ID":"ea96bc12-3a0b-4587-bcb1-ce13464facd7","Type":"ContainerStarted","Data":"3f2485bdbfa5a6586e883ffd6f42e64c5fccbcca03bd8335b10fd56a7f213f8c"} Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.148728 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" event={"ID":"b39c71ed-9976-4656-a5cd-16b9a340d80e","Type":"ContainerStarted","Data":"4264ca35f7db40338b0d6745f81851d74e250a0db4f408edb773b3c73cc91844"} Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.148757 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" event={"ID":"b39c71ed-9976-4656-a5cd-16b9a340d80e","Type":"ContainerStarted","Data":"4a1792a610a4c74fcd31f65b2148048bce19589e231bc5c5616c628070f3136c"} Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.157727 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-7h4kt" podStartSLOduration=2.157702039 podStartE2EDuration="2.157702039s" podCreationTimestamp="2026-01-27 18:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:46.153961083 +0000 UTC m=+5681.252934632" watchObservedRunningTime="2026-01-27 18:31:46.157702039 +0000 UTC m=+5681.256675588" Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.189248 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-6831-account-create-update-8stn7" podStartSLOduration=2.189226729 podStartE2EDuration="2.189226729s" podCreationTimestamp="2026-01-27 18:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:46.18113539 +0000 UTC m=+5681.280108939" watchObservedRunningTime="2026-01-27 18:31:46.189226729 +0000 UTC m=+5681.288200268" Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.212096 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" podStartSLOduration=2.212079434 podStartE2EDuration="2.212079434s" podCreationTimestamp="2026-01-27 18:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:46.206421384 +0000 UTC m=+5681.305394933" watchObservedRunningTime="2026-01-27 18:31:46.212079434 +0000 UTC m=+5681.311052983" Jan 27 18:31:46 crc kubenswrapper[5049]: I0127 18:31:46.230792 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" podStartSLOduration=2.230768362 podStartE2EDuration="2.230768362s" podCreationTimestamp="2026-01-27 18:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:46.223321321 +0000 UTC m=+5681.322294870" watchObservedRunningTime="2026-01-27 18:31:46.230768362 +0000 UTC m=+5681.329741911" Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.160911 5049 generic.go:334] "Generic (PLEG): container finished" podID="997aff65-10ec-4070-b64e-34a4e434dde9" containerID="da8b1747c79193f3804a359e558e2984afd0b675a51e4165dbb48d64c3d74bc7" exitCode=0 Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.160994 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6831-account-create-update-8stn7" event={"ID":"997aff65-10ec-4070-b64e-34a4e434dde9","Type":"ContainerDied","Data":"da8b1747c79193f3804a359e558e2984afd0b675a51e4165dbb48d64c3d74bc7"} Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.163662 5049 generic.go:334] "Generic (PLEG): container finished" podID="b39c71ed-9976-4656-a5cd-16b9a340d80e" containerID="4264ca35f7db40338b0d6745f81851d74e250a0db4f408edb773b3c73cc91844" exitCode=0 Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.163746 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" 
event={"ID":"b39c71ed-9976-4656-a5cd-16b9a340d80e","Type":"ContainerDied","Data":"4264ca35f7db40338b0d6745f81851d74e250a0db4f408edb773b3c73cc91844"} Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.165157 5049 generic.go:334] "Generic (PLEG): container finished" podID="7c5a8837-eb01-432c-bb15-d4f4ff541037" containerID="aed5119a51351316143ef9f1ba64f4a18eb589ce36185c2aa03cfffdb8b6861a" exitCode=0 Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.165248 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7h4kt" event={"ID":"7c5a8837-eb01-432c-bb15-d4f4ff541037","Type":"ContainerDied","Data":"aed5119a51351316143ef9f1ba64f4a18eb589ce36185c2aa03cfffdb8b6861a"} Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.166840 5049 generic.go:334] "Generic (PLEG): container finished" podID="fa894f99-836d-4f71-853c-a90e0a049382" containerID="9a518cd8b0f60726222af34c220957d9be5854681d8b0f1ceab2cf5173ea31dd" exitCode=0 Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.166915 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" event={"ID":"fa894f99-836d-4f71-853c-a90e0a049382","Type":"ContainerDied","Data":"9a518cd8b0f60726222af34c220957d9be5854681d8b0f1ceab2cf5173ea31dd"} Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.639895 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9dchc" Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.646772 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bg748" Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.731473 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea96bc12-3a0b-4587-bcb1-ce13464facd7-operator-scripts\") pod \"ea96bc12-3a0b-4587-bcb1-ce13464facd7\" (UID: \"ea96bc12-3a0b-4587-bcb1-ce13464facd7\") " Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.731620 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlrrf\" (UniqueName: \"kubernetes.io/projected/ea96bc12-3a0b-4587-bcb1-ce13464facd7-kube-api-access-vlrrf\") pod \"ea96bc12-3a0b-4587-bcb1-ce13464facd7\" (UID: \"ea96bc12-3a0b-4587-bcb1-ce13464facd7\") " Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.731695 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thrqc\" (UniqueName: \"kubernetes.io/projected/a575262a-6daf-48b8-a260-386421b4d4bc-kube-api-access-thrqc\") pod \"a575262a-6daf-48b8-a260-386421b4d4bc\" (UID: \"a575262a-6daf-48b8-a260-386421b4d4bc\") " Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.731761 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a575262a-6daf-48b8-a260-386421b4d4bc-operator-scripts\") pod \"a575262a-6daf-48b8-a260-386421b4d4bc\" (UID: \"a575262a-6daf-48b8-a260-386421b4d4bc\") " Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.732240 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea96bc12-3a0b-4587-bcb1-ce13464facd7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea96bc12-3a0b-4587-bcb1-ce13464facd7" (UID: "ea96bc12-3a0b-4587-bcb1-ce13464facd7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.732614 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a575262a-6daf-48b8-a260-386421b4d4bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a575262a-6daf-48b8-a260-386421b4d4bc" (UID: "a575262a-6daf-48b8-a260-386421b4d4bc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.740336 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea96bc12-3a0b-4587-bcb1-ce13464facd7-kube-api-access-vlrrf" (OuterVolumeSpecName: "kube-api-access-vlrrf") pod "ea96bc12-3a0b-4587-bcb1-ce13464facd7" (UID: "ea96bc12-3a0b-4587-bcb1-ce13464facd7"). InnerVolumeSpecName "kube-api-access-vlrrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.742285 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a575262a-6daf-48b8-a260-386421b4d4bc-kube-api-access-thrqc" (OuterVolumeSpecName: "kube-api-access-thrqc") pod "a575262a-6daf-48b8-a260-386421b4d4bc" (UID: "a575262a-6daf-48b8-a260-386421b4d4bc"). InnerVolumeSpecName "kube-api-access-thrqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.833520 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea96bc12-3a0b-4587-bcb1-ce13464facd7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.833565 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlrrf\" (UniqueName: \"kubernetes.io/projected/ea96bc12-3a0b-4587-bcb1-ce13464facd7-kube-api-access-vlrrf\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.833580 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thrqc\" (UniqueName: \"kubernetes.io/projected/a575262a-6daf-48b8-a260-386421b4d4bc-kube-api-access-thrqc\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:47 crc kubenswrapper[5049]: I0127 18:31:47.833592 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a575262a-6daf-48b8-a260-386421b4d4bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.177217 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-9dchc" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.177193 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9dchc" event={"ID":"a575262a-6daf-48b8-a260-386421b4d4bc","Type":"ContainerDied","Data":"c12aeb6f1c41e91a0188ebde6259437d7b77ddbfda5bde77249bc456120a1871"} Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.177361 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c12aeb6f1c41e91a0188ebde6259437d7b77ddbfda5bde77249bc456120a1871" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.179047 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bg748" event={"ID":"ea96bc12-3a0b-4587-bcb1-ce13464facd7","Type":"ContainerDied","Data":"3f2485bdbfa5a6586e883ffd6f42e64c5fccbcca03bd8335b10fd56a7f213f8c"} Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.179067 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bg748" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.179070 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f2485bdbfa5a6586e883ffd6f42e64c5fccbcca03bd8335b10fd56a7f213f8c" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.539196 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.647954 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b39c71ed-9976-4656-a5cd-16b9a340d80e-operator-scripts\") pod \"b39c71ed-9976-4656-a5cd-16b9a340d80e\" (UID: \"b39c71ed-9976-4656-a5cd-16b9a340d80e\") " Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.648493 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39c71ed-9976-4656-a5cd-16b9a340d80e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b39c71ed-9976-4656-a5cd-16b9a340d80e" (UID: "b39c71ed-9976-4656-a5cd-16b9a340d80e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.648632 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjghv\" (UniqueName: \"kubernetes.io/projected/b39c71ed-9976-4656-a5cd-16b9a340d80e-kube-api-access-qjghv\") pod \"b39c71ed-9976-4656-a5cd-16b9a340d80e\" (UID: \"b39c71ed-9976-4656-a5cd-16b9a340d80e\") " Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.649238 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b39c71ed-9976-4656-a5cd-16b9a340d80e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.652821 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39c71ed-9976-4656-a5cd-16b9a340d80e-kube-api-access-qjghv" (OuterVolumeSpecName: "kube-api-access-qjghv") pod "b39c71ed-9976-4656-a5cd-16b9a340d80e" (UID: "b39c71ed-9976-4656-a5cd-16b9a340d80e"). InnerVolumeSpecName "kube-api-access-qjghv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.728147 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.738200 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7h4kt" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.749984 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa894f99-836d-4f71-853c-a90e0a049382-operator-scripts\") pod \"fa894f99-836d-4f71-853c-a90e0a049382\" (UID: \"fa894f99-836d-4f71-853c-a90e0a049382\") " Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.750068 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzvdc\" (UniqueName: \"kubernetes.io/projected/fa894f99-836d-4f71-853c-a90e0a049382-kube-api-access-fzvdc\") pod \"fa894f99-836d-4f71-853c-a90e0a049382\" (UID: \"fa894f99-836d-4f71-853c-a90e0a049382\") " Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.750139 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c5a8837-eb01-432c-bb15-d4f4ff541037-operator-scripts\") pod \"7c5a8837-eb01-432c-bb15-d4f4ff541037\" (UID: \"7c5a8837-eb01-432c-bb15-d4f4ff541037\") " Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.750202 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhvzk\" (UniqueName: \"kubernetes.io/projected/7c5a8837-eb01-432c-bb15-d4f4ff541037-kube-api-access-dhvzk\") pod \"7c5a8837-eb01-432c-bb15-d4f4ff541037\" (UID: \"7c5a8837-eb01-432c-bb15-d4f4ff541037\") " Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.750732 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjghv\" (UniqueName: \"kubernetes.io/projected/b39c71ed-9976-4656-a5cd-16b9a340d80e-kube-api-access-qjghv\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.754557 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa894f99-836d-4f71-853c-a90e0a049382-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa894f99-836d-4f71-853c-a90e0a049382" (UID: "fa894f99-836d-4f71-853c-a90e0a049382"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.756174 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c5a8837-eb01-432c-bb15-d4f4ff541037-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c5a8837-eb01-432c-bb15-d4f4ff541037" (UID: "7c5a8837-eb01-432c-bb15-d4f4ff541037"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.758483 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c5a8837-eb01-432c-bb15-d4f4ff541037-kube-api-access-dhvzk" (OuterVolumeSpecName: "kube-api-access-dhvzk") pod "7c5a8837-eb01-432c-bb15-d4f4ff541037" (UID: "7c5a8837-eb01-432c-bb15-d4f4ff541037"). InnerVolumeSpecName "kube-api-access-dhvzk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.759050 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa894f99-836d-4f71-853c-a90e0a049382-kube-api-access-fzvdc" (OuterVolumeSpecName: "kube-api-access-fzvdc") pod "fa894f99-836d-4f71-853c-a90e0a049382" (UID: "fa894f99-836d-4f71-853c-a90e0a049382"). InnerVolumeSpecName "kube-api-access-fzvdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.805862 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6831-account-create-update-8stn7" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.851974 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdskl\" (UniqueName: \"kubernetes.io/projected/997aff65-10ec-4070-b64e-34a4e434dde9-kube-api-access-sdskl\") pod \"997aff65-10ec-4070-b64e-34a4e434dde9\" (UID: \"997aff65-10ec-4070-b64e-34a4e434dde9\") " Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.852212 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/997aff65-10ec-4070-b64e-34a4e434dde9-operator-scripts\") pod \"997aff65-10ec-4070-b64e-34a4e434dde9\" (UID: \"997aff65-10ec-4070-b64e-34a4e434dde9\") " Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.852737 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhvzk\" (UniqueName: \"kubernetes.io/projected/7c5a8837-eb01-432c-bb15-d4f4ff541037-kube-api-access-dhvzk\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.852766 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa894f99-836d-4f71-853c-a90e0a049382-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.852779 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzvdc\" (UniqueName: \"kubernetes.io/projected/fa894f99-836d-4f71-853c-a90e0a049382-kube-api-access-fzvdc\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.852792 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c5a8837-eb01-432c-bb15-d4f4ff541037-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.852925 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/997aff65-10ec-4070-b64e-34a4e434dde9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "997aff65-10ec-4070-b64e-34a4e434dde9" (UID: "997aff65-10ec-4070-b64e-34a4e434dde9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.854774 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/997aff65-10ec-4070-b64e-34a4e434dde9-kube-api-access-sdskl" (OuterVolumeSpecName: "kube-api-access-sdskl") pod "997aff65-10ec-4070-b64e-34a4e434dde9" (UID: "997aff65-10ec-4070-b64e-34a4e434dde9"). InnerVolumeSpecName "kube-api-access-sdskl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.955065 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/997aff65-10ec-4070-b64e-34a4e434dde9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:48 crc kubenswrapper[5049]: I0127 18:31:48.955296 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdskl\" (UniqueName: \"kubernetes.io/projected/997aff65-10ec-4070-b64e-34a4e434dde9-kube-api-access-sdskl\") on node \"crc\" DevicePath \"\"" Jan 27 18:31:49 crc kubenswrapper[5049]: I0127 18:31:49.195891 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6831-account-create-update-8stn7" Jan 27 18:31:49 crc kubenswrapper[5049]: I0127 18:31:49.195900 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6831-account-create-update-8stn7" event={"ID":"997aff65-10ec-4070-b64e-34a4e434dde9","Type":"ContainerDied","Data":"0915920304fee1f9bcf8a740d0d381dd12dd291254eba29d517bee4ea1bca2c1"} Jan 27 18:31:49 crc kubenswrapper[5049]: I0127 18:31:49.196718 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0915920304fee1f9bcf8a740d0d381dd12dd291254eba29d517bee4ea1bca2c1" Jan 27 18:31:49 crc kubenswrapper[5049]: I0127 18:31:49.202215 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" event={"ID":"b39c71ed-9976-4656-a5cd-16b9a340d80e","Type":"ContainerDied","Data":"4a1792a610a4c74fcd31f65b2148048bce19589e231bc5c5616c628070f3136c"} Jan 27 18:31:49 crc kubenswrapper[5049]: I0127 18:31:49.202282 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a1792a610a4c74fcd31f65b2148048bce19589e231bc5c5616c628070f3136c" Jan 27 18:31:49 crc kubenswrapper[5049]: I0127 18:31:49.202370 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b44a-account-create-update-4hmwk" Jan 27 18:31:49 crc kubenswrapper[5049]: I0127 18:31:49.206447 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7h4kt" event={"ID":"7c5a8837-eb01-432c-bb15-d4f4ff541037","Type":"ContainerDied","Data":"b37bd91385eb13c0f9f819b6ea4304fbd761f065212ba71de948192071442f6a"} Jan 27 18:31:49 crc kubenswrapper[5049]: I0127 18:31:49.206514 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b37bd91385eb13c0f9f819b6ea4304fbd761f065212ba71de948192071442f6a" Jan 27 18:31:49 crc kubenswrapper[5049]: I0127 18:31:49.206609 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7h4kt" Jan 27 18:31:49 crc kubenswrapper[5049]: I0127 18:31:49.215143 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" event={"ID":"fa894f99-836d-4f71-853c-a90e0a049382","Type":"ContainerDied","Data":"11680cc75ce110d79d649f712820bcb286901213570ca755540dad12f25f6608"} Jan 27 18:31:49 crc kubenswrapper[5049]: I0127 18:31:49.215190 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11680cc75ce110d79d649f712820bcb286901213570ca755540dad12f25f6608" Jan 27 18:31:49 crc kubenswrapper[5049]: I0127 18:31:49.215412 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8dc3-account-create-update-lbgkl" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.750140 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hdq4d"] Jan 27 18:31:54 crc kubenswrapper[5049]: E0127 18:31:54.751223 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="997aff65-10ec-4070-b64e-34a4e434dde9" containerName="mariadb-account-create-update" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.751243 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="997aff65-10ec-4070-b64e-34a4e434dde9" containerName="mariadb-account-create-update" Jan 27 18:31:54 crc kubenswrapper[5049]: E0127 18:31:54.751255 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa894f99-836d-4f71-853c-a90e0a049382" containerName="mariadb-account-create-update" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.751264 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa894f99-836d-4f71-853c-a90e0a049382" containerName="mariadb-account-create-update" Jan 27 18:31:54 crc kubenswrapper[5049]: E0127 18:31:54.751286 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5a8837-eb01-432c-bb15-d4f4ff541037" containerName="mariadb-database-create" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.751295 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5a8837-eb01-432c-bb15-d4f4ff541037" containerName="mariadb-database-create" Jan 27 18:31:54 crc kubenswrapper[5049]: E0127 18:31:54.751307 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a575262a-6daf-48b8-a260-386421b4d4bc" containerName="mariadb-database-create" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.751316 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a575262a-6daf-48b8-a260-386421b4d4bc" containerName="mariadb-database-create" Jan 27 18:31:54 crc kubenswrapper[5049]: E0127 18:31:54.751355 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea96bc12-3a0b-4587-bcb1-ce13464facd7" containerName="mariadb-database-create" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.751363 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea96bc12-3a0b-4587-bcb1-ce13464facd7" containerName="mariadb-database-create" Jan 27 18:31:54 crc kubenswrapper[5049]: E0127 18:31:54.751380 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39c71ed-9976-4656-a5cd-16b9a340d80e" containerName="mariadb-account-create-update" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.751389 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39c71ed-9976-4656-a5cd-16b9a340d80e" containerName="mariadb-account-create-update" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.751594 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="997aff65-10ec-4070-b64e-34a4e434dde9" containerName="mariadb-account-create-update" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.751607 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa894f99-836d-4f71-853c-a90e0a049382" containerName="mariadb-account-create-update" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.751619 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea96bc12-3a0b-4587-bcb1-ce13464facd7" containerName="mariadb-database-create" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.751636 5049 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a575262a-6daf-48b8-a260-386421b4d4bc" containerName="mariadb-database-create" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.751660 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b39c71ed-9976-4656-a5cd-16b9a340d80e" containerName="mariadb-account-create-update" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.751698 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c5a8837-eb01-432c-bb15-d4f4ff541037" containerName="mariadb-database-create" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.752512 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.755806 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.756115 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-l7d2z" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.757327 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.782219 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hdq4d"] Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.853844 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hdq4d\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.853926 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-config-data\") pod \"nova-cell0-conductor-db-sync-hdq4d\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.854055 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-scripts\") pod \"nova-cell0-conductor-db-sync-hdq4d\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.854143 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt65j\" (UniqueName: \"kubernetes.io/projected/bf1910ee-4110-4567-b305-013b7a8f6102-kube-api-access-xt65j\") pod \"nova-cell0-conductor-db-sync-hdq4d\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.955552 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hdq4d\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.955608 5049 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-config-data\") pod \"nova-cell0-conductor-db-sync-hdq4d\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.955629 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-scripts\") pod \"nova-cell0-conductor-db-sync-hdq4d\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.955648 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt65j\" (UniqueName: \"kubernetes.io/projected/bf1910ee-4110-4567-b305-013b7a8f6102-kube-api-access-xt65j\") pod \"nova-cell0-conductor-db-sync-hdq4d\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.968371 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-scripts\") pod \"nova-cell0-conductor-db-sync-hdq4d\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.968976 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-config-data\") pod \"nova-cell0-conductor-db-sync-hdq4d\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.991483 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hdq4d\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:31:54 crc kubenswrapper[5049]: I0127 18:31:54.992316 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt65j\" (UniqueName: \"kubernetes.io/projected/bf1910ee-4110-4567-b305-013b7a8f6102-kube-api-access-xt65j\") pod \"nova-cell0-conductor-db-sync-hdq4d\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:31:55 crc kubenswrapper[5049]: I0127 18:31:55.089434 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:31:55 crc kubenswrapper[5049]: I0127 18:31:55.537100 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hdq4d"] Jan 27 18:31:56 crc kubenswrapper[5049]: I0127 18:31:56.290263 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hdq4d" event={"ID":"bf1910ee-4110-4567-b305-013b7a8f6102","Type":"ContainerStarted","Data":"a76ae62f2b484ca8afe3275e49dfc16b86893736204a5a34add176dd58b7493d"} Jan 27 18:31:56 crc kubenswrapper[5049]: I0127 18:31:56.290708 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hdq4d" event={"ID":"bf1910ee-4110-4567-b305-013b7a8f6102","Type":"ContainerStarted","Data":"6da3f42c618ba78d1a2c3b9e3d04f3abbe20c3c4845b584be284c8722aab29be"} Jan 27 18:31:56 crc kubenswrapper[5049]: I0127 18:31:56.328175 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-hdq4d" podStartSLOduration=2.3281579949999998 podStartE2EDuration="2.328157995s" podCreationTimestamp="2026-01-27 18:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:31:56.306862914 +0000 UTC m=+5691.405836463" watchObservedRunningTime="2026-01-27 18:31:56.328157995 +0000 UTC m=+5691.427131544" Jan 27 18:32:01 crc kubenswrapper[5049]: I0127 18:32:01.334938 5049 generic.go:334] "Generic (PLEG): container finished" podID="bf1910ee-4110-4567-b305-013b7a8f6102" containerID="a76ae62f2b484ca8afe3275e49dfc16b86893736204a5a34add176dd58b7493d" exitCode=0 Jan 27 18:32:01 crc kubenswrapper[5049]: I0127 18:32:01.335038 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hdq4d" event={"ID":"bf1910ee-4110-4567-b305-013b7a8f6102","Type":"ContainerDied","Data":"a76ae62f2b484ca8afe3275e49dfc16b86893736204a5a34add176dd58b7493d"} Jan 27 18:32:02 crc kubenswrapper[5049]: I0127 18:32:02.715525 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:32:02 crc kubenswrapper[5049]: I0127 18:32:02.903162 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-config-data\") pod \"bf1910ee-4110-4567-b305-013b7a8f6102\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " Jan 27 18:32:02 crc kubenswrapper[5049]: I0127 18:32:02.903242 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-scripts\") pod \"bf1910ee-4110-4567-b305-013b7a8f6102\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " Jan 27 18:32:02 crc kubenswrapper[5049]: I0127 18:32:02.903298 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-combined-ca-bundle\") pod \"bf1910ee-4110-4567-b305-013b7a8f6102\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " Jan 27 18:32:02 crc kubenswrapper[5049]: I0127 18:32:02.903501 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt65j\" (UniqueName: \"kubernetes.io/projected/bf1910ee-4110-4567-b305-013b7a8f6102-kube-api-access-xt65j\") pod \"bf1910ee-4110-4567-b305-013b7a8f6102\" (UID: \"bf1910ee-4110-4567-b305-013b7a8f6102\") " Jan 27 18:32:02 crc kubenswrapper[5049]: I0127 18:32:02.908755 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1910ee-4110-4567-b305-013b7a8f6102-kube-api-access-xt65j" (OuterVolumeSpecName: "kube-api-access-xt65j") pod "bf1910ee-4110-4567-b305-013b7a8f6102" (UID: "bf1910ee-4110-4567-b305-013b7a8f6102"). InnerVolumeSpecName "kube-api-access-xt65j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:02 crc kubenswrapper[5049]: I0127 18:32:02.911776 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-scripts" (OuterVolumeSpecName: "scripts") pod "bf1910ee-4110-4567-b305-013b7a8f6102" (UID: "bf1910ee-4110-4567-b305-013b7a8f6102"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:02 crc kubenswrapper[5049]: I0127 18:32:02.928198 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf1910ee-4110-4567-b305-013b7a8f6102" (UID: "bf1910ee-4110-4567-b305-013b7a8f6102"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:02 crc kubenswrapper[5049]: I0127 18:32:02.935118 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-config-data" (OuterVolumeSpecName: "config-data") pod "bf1910ee-4110-4567-b305-013b7a8f6102" (UID: "bf1910ee-4110-4567-b305-013b7a8f6102"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.005273 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt65j\" (UniqueName: \"kubernetes.io/projected/bf1910ee-4110-4567-b305-013b7a8f6102-kube-api-access-xt65j\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.005486 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.005583 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.005641 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1910ee-4110-4567-b305-013b7a8f6102-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.368853 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hdq4d" event={"ID":"bf1910ee-4110-4567-b305-013b7a8f6102","Type":"ContainerDied","Data":"6da3f42c618ba78d1a2c3b9e3d04f3abbe20c3c4845b584be284c8722aab29be"} Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.368903 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6da3f42c618ba78d1a2c3b9e3d04f3abbe20c3c4845b584be284c8722aab29be" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.368973 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hdq4d" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.457240 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 18:32:03 crc kubenswrapper[5049]: E0127 18:32:03.457813 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1910ee-4110-4567-b305-013b7a8f6102" containerName="nova-cell0-conductor-db-sync" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.457846 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1910ee-4110-4567-b305-013b7a8f6102" containerName="nova-cell0-conductor-db-sync" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.458172 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1910ee-4110-4567-b305-013b7a8f6102" containerName="nova-cell0-conductor-db-sync" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.459358 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.462128 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.462142 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-l7d2z" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.469190 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.617302 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgsd2\" (UniqueName: \"kubernetes.io/projected/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-kube-api-access-dgsd2\") pod \"nova-cell0-conductor-0\" (UID: \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.617714 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.617806 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.719401 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.719468 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.719538 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgsd2\" (UniqueName: \"kubernetes.io/projected/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-kube-api-access-dgsd2\") pod \"nova-cell0-conductor-0\" (UID: \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.724270 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.724715 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.737158 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgsd2\" (UniqueName: \"kubernetes.io/projected/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-kube-api-access-dgsd2\") pod \"nova-cell0-conductor-0\" (UID: \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\") " pod="openstack/nova-cell0-conductor-0" Jan 27 18:32:03 crc kubenswrapper[5049]: I0127 18:32:03.775235 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 18:32:04 crc kubenswrapper[5049]: I0127 18:32:04.226228 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 18:32:04 crc kubenswrapper[5049]: W0127 18:32:04.228499 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04f936b5_5271_4bdb_89aa_bcbcc6e526ec.slice/crio-e58f5b6c23f2bd87815194a743ef1a2988b8e19faa39c489481d47d1d01d4eee WatchSource:0}: Error finding container e58f5b6c23f2bd87815194a743ef1a2988b8e19faa39c489481d47d1d01d4eee: Status 404 returned error can't find the container with id e58f5b6c23f2bd87815194a743ef1a2988b8e19faa39c489481d47d1d01d4eee Jan 27 18:32:04 crc kubenswrapper[5049]: I0127 18:32:04.381056 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"04f936b5-5271-4bdb-89aa-bcbcc6e526ec","Type":"ContainerStarted","Data":"e58f5b6c23f2bd87815194a743ef1a2988b8e19faa39c489481d47d1d01d4eee"} Jan 27 18:32:05 crc kubenswrapper[5049]: I0127 18:32:05.393399 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"04f936b5-5271-4bdb-89aa-bcbcc6e526ec","Type":"ContainerStarted","Data":"3b694ea58e80266f34b80393c7c0158d3e8e1d97ffb71542ce50980dcf5b12b0"} Jan 27 18:32:05 crc kubenswrapper[5049]: I0127 18:32:05.393818 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 18:32:05 crc kubenswrapper[5049]: I0127 18:32:05.412855 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.412832897 podStartE2EDuration="2.412832897s" podCreationTimestamp="2026-01-27 18:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:05.412538539 +0000 UTC m=+5700.511512098" watchObservedRunningTime="2026-01-27 18:32:05.412832897 +0000 UTC m=+5700.511806446" Jan 27 18:32:13 crc kubenswrapper[5049]: I0127 18:32:13.807381 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.208221 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-s5msl"] Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.209458 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.212142 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.212472 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.225643 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-s5msl"] Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.314474 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jqnz\" (UniqueName: \"kubernetes.io/projected/1711b5d9-b776-40c9-ad56-389cf4174909-kube-api-access-4jqnz\") pod \"nova-cell0-cell-mapping-s5msl\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.314631 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s5msl\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.315203 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-config-data\") pod \"nova-cell0-cell-mapping-s5msl\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.315428 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-scripts\") pod \"nova-cell0-cell-mapping-s5msl\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.336793 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.338632 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.346749 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.357944 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.404566 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.407601 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.415394 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.417526 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-scripts\") pod \"nova-cell0-cell-mapping-s5msl\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.417578 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jqnz\" (UniqueName: \"kubernetes.io/projected/1711b5d9-b776-40c9-ad56-389cf4174909-kube-api-access-4jqnz\") pod \"nova-cell0-cell-mapping-s5msl\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.417638 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s5msl\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.417724 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-config-data\") pod \"nova-cell0-cell-mapping-s5msl\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.442495 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.445217 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.446793 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.447797 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s5msl\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.447815 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-config-data\") pod \"nova-cell0-cell-mapping-s5msl\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.448201 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-scripts\") pod \"nova-cell0-cell-mapping-s5msl\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.468418 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.525566 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl2cm\" (UniqueName: \"kubernetes.io/projected/82071e1e-3ad0-45e7-9308-7f903a2434e4-kube-api-access-xl2cm\") pod \"nova-scheduler-0\" (UID: \"82071e1e-3ad0-45e7-9308-7f903a2434e4\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.525621 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81d4dd5-cd95-41c0-82e3-4155126658e8-config-data\") pod \"nova-metadata-0\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " pod="openstack/nova-metadata-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.525663 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4893fc66-78ee-454f-a636-e5c7b30ecdb2-config-data\") pod \"nova-api-0\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " pod="openstack/nova-api-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.525949 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xkvk\" (UniqueName: \"kubernetes.io/projected/a81d4dd5-cd95-41c0-82e3-4155126658e8-kube-api-access-6xkvk\") pod \"nova-metadata-0\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " pod="openstack/nova-metadata-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.526025 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4893fc66-78ee-454f-a636-e5c7b30ecdb2-logs\") pod \"nova-api-0\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " pod="openstack/nova-api-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.526047 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81d4dd5-cd95-41c0-82e3-4155126658e8-logs\") pod \"nova-metadata-0\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " pod="openstack/nova-metadata-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.526106 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-config-data\") pod \"nova-scheduler-0\" (UID: \"82071e1e-3ad0-45e7-9308-7f903a2434e4\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.526136 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81d4dd5-cd95-41c0-82e3-4155126658e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " pod="openstack/nova-metadata-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.526185 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jrwb\" (UniqueName: \"kubernetes.io/projected/4893fc66-78ee-454f-a636-e5c7b30ecdb2-kube-api-access-8jrwb\") pod \"nova-api-0\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " pod="openstack/nova-api-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.526228 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4893fc66-78ee-454f-a636-e5c7b30ecdb2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " pod="openstack/nova-api-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.526271 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"82071e1e-3ad0-45e7-9308-7f903a2434e4\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.596458 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jqnz\" (UniqueName: \"kubernetes.io/projected/1711b5d9-b776-40c9-ad56-389cf4174909-kube-api-access-4jqnz\") pod \"nova-cell0-cell-mapping-s5msl\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.636931 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81d4dd5-cd95-41c0-82e3-4155126658e8-config-data\") pod \"nova-metadata-0\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " pod="openstack/nova-metadata-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.637011 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4893fc66-78ee-454f-a636-e5c7b30ecdb2-config-data\") pod \"nova-api-0\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " pod="openstack/nova-api-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.637073 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xkvk\" (UniqueName: \"kubernetes.io/projected/a81d4dd5-cd95-41c0-82e3-4155126658e8-kube-api-access-6xkvk\") pod \"nova-metadata-0\" (UID: 
\"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " pod="openstack/nova-metadata-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.637139 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4893fc66-78ee-454f-a636-e5c7b30ecdb2-logs\") pod \"nova-api-0\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " pod="openstack/nova-api-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.637158 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81d4dd5-cd95-41c0-82e3-4155126658e8-logs\") pod \"nova-metadata-0\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " pod="openstack/nova-metadata-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.637226 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-config-data\") pod \"nova-scheduler-0\" (UID: \"82071e1e-3ad0-45e7-9308-7f903a2434e4\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.637261 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81d4dd5-cd95-41c0-82e3-4155126658e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " pod="openstack/nova-metadata-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.637323 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jrwb\" (UniqueName: \"kubernetes.io/projected/4893fc66-78ee-454f-a636-e5c7b30ecdb2-kube-api-access-8jrwb\") pod \"nova-api-0\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " pod="openstack/nova-api-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.637377 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4893fc66-78ee-454f-a636-e5c7b30ecdb2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " pod="openstack/nova-api-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.637420 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"82071e1e-3ad0-45e7-9308-7f903a2434e4\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.637486 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl2cm\" (UniqueName: \"kubernetes.io/projected/82071e1e-3ad0-45e7-9308-7f903a2434e4-kube-api-access-xl2cm\") pod \"nova-scheduler-0\" (UID: \"82071e1e-3ad0-45e7-9308-7f903a2434e4\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.640636 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.642059 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.643201 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4893fc66-78ee-454f-a636-e5c7b30ecdb2-logs\") pod \"nova-api-0\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " pod="openstack/nova-api-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.646035 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81d4dd5-cd95-41c0-82e3-4155126658e8-logs\") pod \"nova-metadata-0\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " pod="openstack/nova-metadata-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.650199 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"82071e1e-3ad0-45e7-9308-7f903a2434e4\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.655764 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81d4dd5-cd95-41c0-82e3-4155126658e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " pod="openstack/nova-metadata-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.658650 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81d4dd5-cd95-41c0-82e3-4155126658e8-config-data\") pod \"nova-metadata-0\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " pod="openstack/nova-metadata-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.659566 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-config-data\") pod \"nova-scheduler-0\" (UID: \"82071e1e-3ad0-45e7-9308-7f903a2434e4\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.662782 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.667792 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4893fc66-78ee-454f-a636-e5c7b30ecdb2-config-data\") pod \"nova-api-0\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " pod="openstack/nova-api-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.681898 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4893fc66-78ee-454f-a636-e5c7b30ecdb2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " pod="openstack/nova-api-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.692505 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.698563 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jrwb\" (UniqueName: \"kubernetes.io/projected/4893fc66-78ee-454f-a636-e5c7b30ecdb2-kube-api-access-8jrwb\") pod \"nova-api-0\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " pod="openstack/nova-api-0" Jan 27 18:32:14 crc 
kubenswrapper[5049]: I0127 18:32:14.728629 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.744832 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xkvk\" (UniqueName: \"kubernetes.io/projected/a81d4dd5-cd95-41c0-82e3-4155126658e8-kube-api-access-6xkvk\") pod \"nova-metadata-0\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " pod="openstack/nova-metadata-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.748957 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl2cm\" (UniqueName: \"kubernetes.io/projected/82071e1e-3ad0-45e7-9308-7f903a2434e4-kube-api-access-xl2cm\") pod \"nova-scheduler-0\" (UID: \"82071e1e-3ad0-45e7-9308-7f903a2434e4\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.763558 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6db99c5957-nklqk"] Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.765074 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.838138 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.838953 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.843096 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.843237 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.849594 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55ndg\" (UniqueName: \"kubernetes.io/projected/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-kube-api-access-55ndg\") pod \"nova-cell1-novncproxy-0\" (UID: \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.865743 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6db99c5957-nklqk"] Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.874099 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.953707 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9jhf\" (UniqueName: \"kubernetes.io/projected/e959befa-4eff-46e2-853c-b057db776837-kube-api-access-s9jhf\") pod \"dnsmasq-dns-6db99c5957-nklqk\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.954040 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-dns-svc\") pod \"dnsmasq-dns-6db99c5957-nklqk\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.954081 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.954166 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-config\") pod \"dnsmasq-dns-6db99c5957-nklqk\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.954187 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-ovsdbserver-nb\") pod \"dnsmasq-dns-6db99c5957-nklqk\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.954221 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55ndg\" (UniqueName: \"kubernetes.io/projected/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-kube-api-access-55ndg\") pod \"nova-cell1-novncproxy-0\" (UID: \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.954380 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-ovsdbserver-sb\") pod \"dnsmasq-dns-6db99c5957-nklqk\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.954411 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.960547 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.960782 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:14 crc kubenswrapper[5049]: I0127 18:32:14.973812 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55ndg\" (UniqueName: \"kubernetes.io/projected/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-kube-api-access-55ndg\") pod \"nova-cell1-novncproxy-0\" (UID: \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.003810 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.055848 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-ovsdbserver-sb\") pod \"dnsmasq-dns-6db99c5957-nklqk\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.055956 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9jhf\" (UniqueName: \"kubernetes.io/projected/e959befa-4eff-46e2-853c-b057db776837-kube-api-access-s9jhf\") pod \"dnsmasq-dns-6db99c5957-nklqk\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.055989 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-dns-svc\") pod \"dnsmasq-dns-6db99c5957-nklqk\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.056062 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-config\") pod \"dnsmasq-dns-6db99c5957-nklqk\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.056091 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-ovsdbserver-nb\") pod \"dnsmasq-dns-6db99c5957-nklqk\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.057370 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-ovsdbserver-nb\") pod \"dnsmasq-dns-6db99c5957-nklqk\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.058061 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-dns-svc\") pod \"dnsmasq-dns-6db99c5957-nklqk\" 
(UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.058130 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-ovsdbserver-sb\") pod \"dnsmasq-dns-6db99c5957-nklqk\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.058495 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-config\") pod \"dnsmasq-dns-6db99c5957-nklqk\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.077804 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.081879 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9jhf\" (UniqueName: \"kubernetes.io/projected/e959befa-4eff-46e2-853c-b057db776837-kube-api-access-s9jhf\") pod \"dnsmasq-dns-6db99c5957-nklqk\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.123609 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.475884 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:15 crc kubenswrapper[5049]: W0127 18:32:15.484789 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda81d4dd5_cd95_41c0_82e3_4155126658e8.slice/crio-78d6731b75088b7202b632a021d87f4f1eb93a0e6d2601a78dc32df072667ac1 WatchSource:0}: Error finding container 78d6731b75088b7202b632a021d87f4f1eb93a0e6d2601a78dc32df072667ac1: Status 404 returned error can't find the container with id 78d6731b75088b7202b632a021d87f4f1eb93a0e6d2601a78dc32df072667ac1 Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.520305 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a81d4dd5-cd95-41c0-82e3-4155126658e8","Type":"ContainerStarted","Data":"78d6731b75088b7202b632a021d87f4f1eb93a0e6d2601a78dc32df072667ac1"} Jan 27 18:32:15 crc kubenswrapper[5049]: W0127 18:32:15.565790 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4893fc66_78ee_454f_a636_e5c7b30ecdb2.slice/crio-a36a8b6145cca010871ac02c64a02d4e121312c795f225e6fbf72be4935c796a WatchSource:0}: Error finding container a36a8b6145cca010871ac02c64a02d4e121312c795f225e6fbf72be4935c796a: Status 404 returned error can't find the container with id a36a8b6145cca010871ac02c64a02d4e121312c795f225e6fbf72be4935c796a Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.574463 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.590112 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-s5msl"] Jan 27 18:32:15 crc kubenswrapper[5049]: W0127 18:32:15.592611 5049 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1711b5d9_b776_40c9_ad56_389cf4174909.slice/crio-fe0b4588a80d0c6d366b0e2dc743e76bac41f6954d783396465e180066b85712 WatchSource:0}: Error finding container fe0b4588a80d0c6d366b0e2dc743e76bac41f6954d783396465e180066b85712: Status 404 returned error can't find the container with id fe0b4588a80d0c6d366b0e2dc743e76bac41f6954d783396465e180066b85712 Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.661124 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:15 crc kubenswrapper[5049]: W0127 18:32:15.664340 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82071e1e_3ad0_45e7_9308_7f903a2434e4.slice/crio-adc6bb6417928e5c988c031419c2c53728447dd9f0ace5987b1f06528a2e4dde WatchSource:0}: Error finding container adc6bb6417928e5c988c031419c2c53728447dd9f0ace5987b1f06528a2e4dde: Status 404 returned error can't find the container with id adc6bb6417928e5c988c031419c2c53728447dd9f0ace5987b1f06528a2e4dde Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.746529 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7pqbr"] Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.748855 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.750793 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.751945 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.756140 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7pqbr"] Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.800518 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.814816 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6db99c5957-nklqk"] Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.876891 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg9q4\" (UniqueName: \"kubernetes.io/projected/8349c90d-4c31-46c3-8400-fb68fc6f2810-kube-api-access-jg9q4\") pod \"nova-cell1-conductor-db-sync-7pqbr\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.877939 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-scripts\") pod \"nova-cell1-conductor-db-sync-7pqbr\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.878075 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-config-data\") pod \"nova-cell1-conductor-db-sync-7pqbr\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " 
pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.880735 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7pqbr\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.983734 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7pqbr\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.984224 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg9q4\" (UniqueName: \"kubernetes.io/projected/8349c90d-4c31-46c3-8400-fb68fc6f2810-kube-api-access-jg9q4\") pod \"nova-cell1-conductor-db-sync-7pqbr\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.984750 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-scripts\") pod \"nova-cell1-conductor-db-sync-7pqbr\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.985306 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-config-data\") pod \"nova-cell1-conductor-db-sync-7pqbr\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.989307 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-scripts\") pod \"nova-cell1-conductor-db-sync-7pqbr\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.991399 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7pqbr\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:15 crc kubenswrapper[5049]: I0127 18:32:15.991960 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-config-data\") pod \"nova-cell1-conductor-db-sync-7pqbr\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.007009 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg9q4\" (UniqueName: \"kubernetes.io/projected/8349c90d-4c31-46c3-8400-fb68fc6f2810-kube-api-access-jg9q4\") pod \"nova-cell1-conductor-db-sync-7pqbr\" (UID: 
\"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.091962 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:16 crc kubenswrapper[5049]: W0127 18:32:16.438910 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8349c90d_4c31_46c3_8400_fb68fc6f2810.slice/crio-14094177d9585c28d1c8f68e551ccdd277efb40fd5c2485f9ef1661d720fa2ad WatchSource:0}: Error finding container 14094177d9585c28d1c8f68e551ccdd277efb40fd5c2485f9ef1661d720fa2ad: Status 404 returned error can't find the container with id 14094177d9585c28d1c8f68e551ccdd277efb40fd5c2485f9ef1661d720fa2ad Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.440531 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7pqbr"] Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.538949 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4893fc66-78ee-454f-a636-e5c7b30ecdb2","Type":"ContainerStarted","Data":"a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2"} Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.539006 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4893fc66-78ee-454f-a636-e5c7b30ecdb2","Type":"ContainerStarted","Data":"b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f"} Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.539021 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4893fc66-78ee-454f-a636-e5c7b30ecdb2","Type":"ContainerStarted","Data":"a36a8b6145cca010871ac02c64a02d4e121312c795f225e6fbf72be4935c796a"} Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.544957 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a81d4dd5-cd95-41c0-82e3-4155126658e8","Type":"ContainerStarted","Data":"0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab"} Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.545002 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a81d4dd5-cd95-41c0-82e3-4155126658e8","Type":"ContainerStarted","Data":"fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8"} Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.555500 5049 generic.go:334] "Generic (PLEG): container finished" podID="e959befa-4eff-46e2-853c-b057db776837" containerID="501867b99c463bb3db6eb7494140fb34be7cb4badf7e5688ee718481e2df9fab" exitCode=0 Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.555567 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6db99c5957-nklqk" event={"ID":"e959befa-4eff-46e2-853c-b057db776837","Type":"ContainerDied","Data":"501867b99c463bb3db6eb7494140fb34be7cb4badf7e5688ee718481e2df9fab"} Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.555596 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6db99c5957-nklqk" event={"ID":"e959befa-4eff-46e2-853c-b057db776837","Type":"ContainerStarted","Data":"c24d19727d5e59447bd671a24c2c0876cdb0bca39eb94c9423ac2c359e218cde"} Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.558986 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s5msl" 
event={"ID":"1711b5d9-b776-40c9-ad56-389cf4174909","Type":"ContainerStarted","Data":"e57eb623eee5cd4205aadacbfb109bd00bfa9c5002d90ea0f70cb47a08e187fe"} Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.559048 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s5msl" event={"ID":"1711b5d9-b776-40c9-ad56-389cf4174909","Type":"ContainerStarted","Data":"fe0b4588a80d0c6d366b0e2dc743e76bac41f6954d783396465e180066b85712"} Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.561272 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"82071e1e-3ad0-45e7-9308-7f903a2434e4","Type":"ContainerStarted","Data":"94f7c858a0f6f10c4171853032a7d965158acfa98f072784827ef3ba2ff33585"} Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.561297 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"82071e1e-3ad0-45e7-9308-7f903a2434e4","Type":"ContainerStarted","Data":"adc6bb6417928e5c988c031419c2c53728447dd9f0ace5987b1f06528a2e4dde"} Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.566581 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"75ca3e57-fb44-41a7-8ce4-a88f81a418a7","Type":"ContainerStarted","Data":"d9bde9784d3f5bc32d7f2e5df6f17cac55342325149ffc02f5805bd6f6a7c95e"} Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.566630 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"75ca3e57-fb44-41a7-8ce4-a88f81a418a7","Type":"ContainerStarted","Data":"1fabe3b823cd8724613649c26f0800ff1892610f506e10e72b0a00c94fec7180"} Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.571812 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7pqbr" event={"ID":"8349c90d-4c31-46c3-8400-fb68fc6f2810","Type":"ContainerStarted","Data":"14094177d9585c28d1c8f68e551ccdd277efb40fd5c2485f9ef1661d720fa2ad"} Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.577874 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.577849099 podStartE2EDuration="2.577849099s" podCreationTimestamp="2026-01-27 18:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:16.566104048 +0000 UTC m=+5711.665077597" watchObservedRunningTime="2026-01-27 18:32:16.577849099 +0000 UTC m=+5711.676822648" Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.594329 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-s5msl" podStartSLOduration=2.594303014 podStartE2EDuration="2.594303014s" podCreationTimestamp="2026-01-27 18:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:16.584031384 +0000 UTC m=+5711.683004963" watchObservedRunningTime="2026-01-27 18:32:16.594303014 +0000 UTC m=+5711.693276563" Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.637393 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.63737206 podStartE2EDuration="2.63737206s" podCreationTimestamp="2026-01-27 18:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 18:32:16.61896137 +0000 UTC m=+5711.717934919" watchObservedRunningTime="2026-01-27 18:32:16.63737206 +0000 UTC m=+5711.736345609" Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.701813 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.701788639 podStartE2EDuration="2.701788639s" podCreationTimestamp="2026-01-27 18:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:16.668121448 +0000 UTC m=+5711.767095007" watchObservedRunningTime="2026-01-27 18:32:16.701788639 +0000 UTC m=+5711.800762188" Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.718576 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.718551412 podStartE2EDuration="2.718551412s" podCreationTimestamp="2026-01-27 18:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:16.699909926 +0000 UTC m=+5711.798883485" watchObservedRunningTime="2026-01-27 18:32:16.718551412 +0000 UTC m=+5711.817524961" Jan 27 18:32:16 crc kubenswrapper[5049]: I0127 18:32:16.956241 5049 scope.go:117] "RemoveContainer" containerID="406935adc235719e0dbb011b271ee7dc773a95df6b6bd343f3fd77f363491951" Jan 27 18:32:17 crc kubenswrapper[5049]: I0127 18:32:17.581972 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7pqbr" event={"ID":"8349c90d-4c31-46c3-8400-fb68fc6f2810","Type":"ContainerStarted","Data":"0e373a1884cabeee0e2a0ac78949b8c6aac26c2a4c8df6657681bb8bcbc81848"} Jan 27 18:32:17 crc kubenswrapper[5049]: I0127 18:32:17.584301 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6db99c5957-nklqk" event={"ID":"e959befa-4eff-46e2-853c-b057db776837","Type":"ContainerStarted","Data":"a1539a1897a85aa3eba9e579e189afcb279f4334361660b64de328ac2ba56494"} Jan 27 18:32:17 crc kubenswrapper[5049]: I0127 18:32:17.603763 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7pqbr" podStartSLOduration=2.603745004 podStartE2EDuration="2.603745004s" podCreationTimestamp="2026-01-27 18:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:17.603351453 +0000 UTC m=+5712.702325002" watchObservedRunningTime="2026-01-27 18:32:17.603745004 +0000 UTC m=+5712.702718553" Jan 27 18:32:17 crc kubenswrapper[5049]: I0127 18:32:17.626121 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6db99c5957-nklqk" podStartSLOduration=3.626101285 podStartE2EDuration="3.626101285s" podCreationTimestamp="2026-01-27 18:32:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:17.621026812 +0000 UTC m=+5712.720000361" watchObservedRunningTime="2026-01-27 18:32:17.626101285 +0000 UTC m=+5712.725074834" Jan 27 18:32:18 crc kubenswrapper[5049]: I0127 18:32:18.591818 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:19 crc kubenswrapper[5049]: I0127 18:32:19.840168 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-metadata-0" Jan 27 18:32:19 crc kubenswrapper[5049]: I0127 18:32:19.840504 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:32:20 crc kubenswrapper[5049]: I0127 18:32:20.004495 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 18:32:20 crc kubenswrapper[5049]: I0127 18:32:20.078348 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:20 crc kubenswrapper[5049]: I0127 18:32:20.613226 5049 generic.go:334] "Generic (PLEG): container finished" podID="8349c90d-4c31-46c3-8400-fb68fc6f2810" containerID="0e373a1884cabeee0e2a0ac78949b8c6aac26c2a4c8df6657681bb8bcbc81848" exitCode=0 Jan 27 18:32:20 crc kubenswrapper[5049]: I0127 18:32:20.613276 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7pqbr" event={"ID":"8349c90d-4c31-46c3-8400-fb68fc6f2810","Type":"ContainerDied","Data":"0e373a1884cabeee0e2a0ac78949b8c6aac26c2a4c8df6657681bb8bcbc81848"} Jan 27 18:32:21 crc kubenswrapper[5049]: I0127 18:32:21.644972 5049 generic.go:334] "Generic (PLEG): container finished" podID="1711b5d9-b776-40c9-ad56-389cf4174909" containerID="e57eb623eee5cd4205aadacbfb109bd00bfa9c5002d90ea0f70cb47a08e187fe" exitCode=0 Jan 27 18:32:21 crc kubenswrapper[5049]: I0127 18:32:21.645046 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s5msl" event={"ID":"1711b5d9-b776-40c9-ad56-389cf4174909","Type":"ContainerDied","Data":"e57eb623eee5cd4205aadacbfb109bd00bfa9c5002d90ea0f70cb47a08e187fe"} Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.034627 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.120644 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-combined-ca-bundle\") pod \"8349c90d-4c31-46c3-8400-fb68fc6f2810\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.120857 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-config-data\") pod \"8349c90d-4c31-46c3-8400-fb68fc6f2810\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.120895 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg9q4\" (UniqueName: \"kubernetes.io/projected/8349c90d-4c31-46c3-8400-fb68fc6f2810-kube-api-access-jg9q4\") pod \"8349c90d-4c31-46c3-8400-fb68fc6f2810\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.120983 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-scripts\") pod \"8349c90d-4c31-46c3-8400-fb68fc6f2810\" (UID: \"8349c90d-4c31-46c3-8400-fb68fc6f2810\") " Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.126623 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8349c90d-4c31-46c3-8400-fb68fc6f2810-kube-api-access-jg9q4" (OuterVolumeSpecName: "kube-api-access-jg9q4") pod "8349c90d-4c31-46c3-8400-fb68fc6f2810" (UID: "8349c90d-4c31-46c3-8400-fb68fc6f2810"). InnerVolumeSpecName "kube-api-access-jg9q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.126879 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-scripts" (OuterVolumeSpecName: "scripts") pod "8349c90d-4c31-46c3-8400-fb68fc6f2810" (UID: "8349c90d-4c31-46c3-8400-fb68fc6f2810"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.147931 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8349c90d-4c31-46c3-8400-fb68fc6f2810" (UID: "8349c90d-4c31-46c3-8400-fb68fc6f2810"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.149308 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-config-data" (OuterVolumeSpecName: "config-data") pod "8349c90d-4c31-46c3-8400-fb68fc6f2810" (UID: "8349c90d-4c31-46c3-8400-fb68fc6f2810"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.223260 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.223301 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg9q4\" (UniqueName: \"kubernetes.io/projected/8349c90d-4c31-46c3-8400-fb68fc6f2810-kube-api-access-jg9q4\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.223315 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.223323 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8349c90d-4c31-46c3-8400-fb68fc6f2810-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.676015 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7pqbr" event={"ID":"8349c90d-4c31-46c3-8400-fb68fc6f2810","Type":"ContainerDied","Data":"14094177d9585c28d1c8f68e551ccdd277efb40fd5c2485f9ef1661d720fa2ad"} Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.676380 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14094177d9585c28d1c8f68e551ccdd277efb40fd5c2485f9ef1661d720fa2ad" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.676078 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7pqbr" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.749862 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 18:32:22 crc kubenswrapper[5049]: E0127 18:32:22.750258 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8349c90d-4c31-46c3-8400-fb68fc6f2810" containerName="nova-cell1-conductor-db-sync" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.750277 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="8349c90d-4c31-46c3-8400-fb68fc6f2810" containerName="nova-cell1-conductor-db-sync" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.750456 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="8349c90d-4c31-46c3-8400-fb68fc6f2810" containerName="nova-cell1-conductor-db-sync" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.751061 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.753272 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.776135 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.835209 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dnnn\" (UniqueName: \"kubernetes.io/projected/e3a94daa-7472-434d-9137-fe254ac3027e-kube-api-access-5dnnn\") pod \"nova-cell1-conductor-0\" (UID: \"e3a94daa-7472-434d-9137-fe254ac3027e\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.835314 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a94daa-7472-434d-9137-fe254ac3027e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e3a94daa-7472-434d-9137-fe254ac3027e\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.835395 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a94daa-7472-434d-9137-fe254ac3027e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e3a94daa-7472-434d-9137-fe254ac3027e\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.937421 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a94daa-7472-434d-9137-fe254ac3027e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e3a94daa-7472-434d-9137-fe254ac3027e\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.937591 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dnnn\" (UniqueName: \"kubernetes.io/projected/e3a94daa-7472-434d-9137-fe254ac3027e-kube-api-access-5dnnn\") pod \"nova-cell1-conductor-0\" (UID: \"e3a94daa-7472-434d-9137-fe254ac3027e\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.937625 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a94daa-7472-434d-9137-fe254ac3027e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e3a94daa-7472-434d-9137-fe254ac3027e\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.943986 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a94daa-7472-434d-9137-fe254ac3027e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e3a94daa-7472-434d-9137-fe254ac3027e\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.958707 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dnnn\" (UniqueName: \"kubernetes.io/projected/e3a94daa-7472-434d-9137-fe254ac3027e-kube-api-access-5dnnn\") pod \"nova-cell1-conductor-0\" (UID: \"e3a94daa-7472-434d-9137-fe254ac3027e\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:32:22 crc kubenswrapper[5049]: I0127 18:32:22.963319 5049 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a94daa-7472-434d-9137-fe254ac3027e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e3a94daa-7472-434d-9137-fe254ac3027e\") " pod="openstack/nova-cell1-conductor-0" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.078799 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.176219 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.241704 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-scripts\") pod \"1711b5d9-b776-40c9-ad56-389cf4174909\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.241748 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-combined-ca-bundle\") pod \"1711b5d9-b776-40c9-ad56-389cf4174909\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.241810 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-config-data\") pod \"1711b5d9-b776-40c9-ad56-389cf4174909\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.241955 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jqnz\" (UniqueName: \"kubernetes.io/projected/1711b5d9-b776-40c9-ad56-389cf4174909-kube-api-access-4jqnz\") pod \"1711b5d9-b776-40c9-ad56-389cf4174909\" (UID: \"1711b5d9-b776-40c9-ad56-389cf4174909\") " Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.247547 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1711b5d9-b776-40c9-ad56-389cf4174909-kube-api-access-4jqnz" (OuterVolumeSpecName: "kube-api-access-4jqnz") pod "1711b5d9-b776-40c9-ad56-389cf4174909" (UID: "1711b5d9-b776-40c9-ad56-389cf4174909"). InnerVolumeSpecName "kube-api-access-4jqnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.247886 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-scripts" (OuterVolumeSpecName: "scripts") pod "1711b5d9-b776-40c9-ad56-389cf4174909" (UID: "1711b5d9-b776-40c9-ad56-389cf4174909"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.278383 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-config-data" (OuterVolumeSpecName: "config-data") pod "1711b5d9-b776-40c9-ad56-389cf4174909" (UID: "1711b5d9-b776-40c9-ad56-389cf4174909"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.278796 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1711b5d9-b776-40c9-ad56-389cf4174909" (UID: "1711b5d9-b776-40c9-ad56-389cf4174909"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.344759 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.344794 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.344807 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1711b5d9-b776-40c9-ad56-389cf4174909-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.344816 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jqnz\" (UniqueName: \"kubernetes.io/projected/1711b5d9-b776-40c9-ad56-389cf4174909-kube-api-access-4jqnz\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.515444 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.700669 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e3a94daa-7472-434d-9137-fe254ac3027e","Type":"ContainerStarted","Data":"cc4c6e38906afcfcbc86761782e6b32a64c85cfc810063d7755b1272fdf11ae4"} Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.702594 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s5msl" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.705049 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.705083 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e3a94daa-7472-434d-9137-fe254ac3027e","Type":"ContainerStarted","Data":"04dffe057c074c6e1cfaa964a5987d1ab214c2f4909081fbafb4cc0d5db2c4c9"} Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.705125 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s5msl" event={"ID":"1711b5d9-b776-40c9-ad56-389cf4174909","Type":"ContainerDied","Data":"fe0b4588a80d0c6d366b0e2dc743e76bac41f6954d783396465e180066b85712"} Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.705142 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe0b4588a80d0c6d366b0e2dc743e76bac41f6954d783396465e180066b85712" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.734931 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.7349140379999999 podStartE2EDuration="1.734914038s" podCreationTimestamp="2026-01-27 18:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:23.721522039 +0000 UTC m=+5718.820495588" watchObservedRunningTime="2026-01-27 18:32:23.734914038 +0000 UTC m=+5718.833887587" Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.881870 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.882154 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4893fc66-78ee-454f-a636-e5c7b30ecdb2" containerName="nova-api-log" containerID="cri-o://b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f" gracePeriod=30 Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.882264 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4893fc66-78ee-454f-a636-e5c7b30ecdb2" containerName="nova-api-api" containerID="cri-o://a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2" gracePeriod=30 Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.897217 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.897487 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="82071e1e-3ad0-45e7-9308-7f903a2434e4" containerName="nova-scheduler-scheduler" containerID="cri-o://94f7c858a0f6f10c4171853032a7d965158acfa98f072784827ef3ba2ff33585" gracePeriod=30 Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.918434 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 18:32:23.918748 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a81d4dd5-cd95-41c0-82e3-4155126658e8" containerName="nova-metadata-log" containerID="cri-o://fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8" gracePeriod=30 Jan 27 18:32:23 crc kubenswrapper[5049]: I0127 
18:32:23.918805 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a81d4dd5-cd95-41c0-82e3-4155126658e8" containerName="nova-metadata-metadata" containerID="cri-o://0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab" gracePeriod=30 Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.534661 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.600657 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.676115 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jrwb\" (UniqueName: \"kubernetes.io/projected/4893fc66-78ee-454f-a636-e5c7b30ecdb2-kube-api-access-8jrwb\") pod \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.676198 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4893fc66-78ee-454f-a636-e5c7b30ecdb2-logs\") pod \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.676229 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81d4dd5-cd95-41c0-82e3-4155126658e8-combined-ca-bundle\") pod \"a81d4dd5-cd95-41c0-82e3-4155126658e8\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.676320 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xkvk\" (UniqueName: \"kubernetes.io/projected/a81d4dd5-cd95-41c0-82e3-4155126658e8-kube-api-access-6xkvk\") pod \"a81d4dd5-cd95-41c0-82e3-4155126658e8\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.676378 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4893fc66-78ee-454f-a636-e5c7b30ecdb2-config-data\") pod \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.676408 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81d4dd5-cd95-41c0-82e3-4155126658e8-config-data\") pod \"a81d4dd5-cd95-41c0-82e3-4155126658e8\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.676440 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4893fc66-78ee-454f-a636-e5c7b30ecdb2-combined-ca-bundle\") pod \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\" (UID: \"4893fc66-78ee-454f-a636-e5c7b30ecdb2\") " Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.676503 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81d4dd5-cd95-41c0-82e3-4155126658e8-logs\") pod \"a81d4dd5-cd95-41c0-82e3-4155126658e8\" (UID: \"a81d4dd5-cd95-41c0-82e3-4155126658e8\") " Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.676772 5049 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4893fc66-78ee-454f-a636-e5c7b30ecdb2-logs" (OuterVolumeSpecName: "logs") pod "4893fc66-78ee-454f-a636-e5c7b30ecdb2" (UID: "4893fc66-78ee-454f-a636-e5c7b30ecdb2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.677054 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4893fc66-78ee-454f-a636-e5c7b30ecdb2-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.677403 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a81d4dd5-cd95-41c0-82e3-4155126658e8-logs" (OuterVolumeSpecName: "logs") pod "a81d4dd5-cd95-41c0-82e3-4155126658e8" (UID: "a81d4dd5-cd95-41c0-82e3-4155126658e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.682993 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a81d4dd5-cd95-41c0-82e3-4155126658e8-kube-api-access-6xkvk" (OuterVolumeSpecName: "kube-api-access-6xkvk") pod "a81d4dd5-cd95-41c0-82e3-4155126658e8" (UID: "a81d4dd5-cd95-41c0-82e3-4155126658e8"). InnerVolumeSpecName "kube-api-access-6xkvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.692007 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4893fc66-78ee-454f-a636-e5c7b30ecdb2-kube-api-access-8jrwb" (OuterVolumeSpecName: "kube-api-access-8jrwb") pod "4893fc66-78ee-454f-a636-e5c7b30ecdb2" (UID: "4893fc66-78ee-454f-a636-e5c7b30ecdb2"). InnerVolumeSpecName "kube-api-access-8jrwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.709745 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81d4dd5-cd95-41c0-82e3-4155126658e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a81d4dd5-cd95-41c0-82e3-4155126658e8" (UID: "a81d4dd5-cd95-41c0-82e3-4155126658e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.711118 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4893fc66-78ee-454f-a636-e5c7b30ecdb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4893fc66-78ee-454f-a636-e5c7b30ecdb2" (UID: "4893fc66-78ee-454f-a636-e5c7b30ecdb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.713984 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4893fc66-78ee-454f-a636-e5c7b30ecdb2-config-data" (OuterVolumeSpecName: "config-data") pod "4893fc66-78ee-454f-a636-e5c7b30ecdb2" (UID: "4893fc66-78ee-454f-a636-e5c7b30ecdb2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.714332 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.715340 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a81d4dd5-cd95-41c0-82e3-4155126658e8","Type":"ContainerDied","Data":"0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab"} Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.715488 5049 scope.go:117] "RemoveContainer" containerID="0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.714791 5049 generic.go:334] "Generic (PLEG): container finished" podID="a81d4dd5-cd95-41c0-82e3-4155126658e8" containerID="0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab" exitCode=0 Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.715837 5049 generic.go:334] "Generic (PLEG): container finished" podID="a81d4dd5-cd95-41c0-82e3-4155126658e8" containerID="fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8" exitCode=143 Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.715928 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a81d4dd5-cd95-41c0-82e3-4155126658e8","Type":"ContainerDied","Data":"fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8"} Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.715964 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a81d4dd5-cd95-41c0-82e3-4155126658e8","Type":"ContainerDied","Data":"78d6731b75088b7202b632a021d87f4f1eb93a0e6d2601a78dc32df072667ac1"} Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.719353 5049 generic.go:334] "Generic (PLEG): container finished" podID="4893fc66-78ee-454f-a636-e5c7b30ecdb2" containerID="a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2" exitCode=0 Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.719388 5049 generic.go:334] "Generic (PLEG): container finished" podID="4893fc66-78ee-454f-a636-e5c7b30ecdb2" containerID="b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f" exitCode=143 Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.719770 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4893fc66-78ee-454f-a636-e5c7b30ecdb2","Type":"ContainerDied","Data":"a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2"} Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.719815 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4893fc66-78ee-454f-a636-e5c7b30ecdb2","Type":"ContainerDied","Data":"b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f"} Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.719827 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4893fc66-78ee-454f-a636-e5c7b30ecdb2","Type":"ContainerDied","Data":"a36a8b6145cca010871ac02c64a02d4e121312c795f225e6fbf72be4935c796a"} Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.720036 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.724102 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81d4dd5-cd95-41c0-82e3-4155126658e8-config-data" (OuterVolumeSpecName: "config-data") pod "a81d4dd5-cd95-41c0-82e3-4155126658e8" (UID: "a81d4dd5-cd95-41c0-82e3-4155126658e8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.755883 5049 scope.go:117] "RemoveContainer" containerID="fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.762296 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.779151 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jrwb\" (UniqueName: \"kubernetes.io/projected/4893fc66-78ee-454f-a636-e5c7b30ecdb2-kube-api-access-8jrwb\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.779182 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81d4dd5-cd95-41c0-82e3-4155126658e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.779195 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xkvk\" (UniqueName: \"kubernetes.io/projected/a81d4dd5-cd95-41c0-82e3-4155126658e8-kube-api-access-6xkvk\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.779206 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4893fc66-78ee-454f-a636-e5c7b30ecdb2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.779217 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a81d4dd5-cd95-41c0-82e3-4155126658e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.779227 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4893fc66-78ee-454f-a636-e5c7b30ecdb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.779237 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a81d4dd5-cd95-41c0-82e3-4155126658e8-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.780470 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.781404 5049 scope.go:117] "RemoveContainer" containerID="0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab" Jan 27 18:32:24 crc kubenswrapper[5049]: E0127 18:32:24.781833 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab\": container with ID starting with 0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab not found: ID does not exist" containerID="0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.781923 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab"} err="failed to get container status \"0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab\": rpc error: code = NotFound desc = could not find container \"0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab\": container 
with ID starting with 0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab not found: ID does not exist" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.781949 5049 scope.go:117] "RemoveContainer" containerID="fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8" Jan 27 18:32:24 crc kubenswrapper[5049]: E0127 18:32:24.782782 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8\": container with ID starting with fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8 not found: ID does not exist" containerID="fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.782839 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8"} err="failed to get container status \"fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8\": rpc error: code = NotFound desc = could not find container \"fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8\": container with ID starting with fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8 not found: ID does not exist" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.782903 5049 scope.go:117] "RemoveContainer" containerID="0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.785584 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab"} err="failed to get container status \"0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab\": rpc error: code = NotFound desc = could not find container \"0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab\": container with ID starting with 0da8e3691d7bee5eb55f7ae3b1276ef54db35669bccf9a3fc41a208339ad14ab not found: ID does not exist" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.785634 5049 scope.go:117] "RemoveContainer" containerID="fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.786108 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8"} err="failed to get container status \"fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8\": rpc error: code = NotFound desc = could not find container \"fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8\": container with ID starting with fb2a9967869aa66d246f3724bbd14072c26967a33215aec4415b184834ea15a8 not found: ID does not exist" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.786172 5049 scope.go:117] "RemoveContainer" containerID="a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.807751 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:24 crc kubenswrapper[5049]: E0127 18:32:24.808247 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81d4dd5-cd95-41c0-82e3-4155126658e8" containerName="nova-metadata-log" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.808266 5049 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a81d4dd5-cd95-41c0-82e3-4155126658e8" containerName="nova-metadata-log" Jan 27 18:32:24 crc kubenswrapper[5049]: E0127 18:32:24.808284 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81d4dd5-cd95-41c0-82e3-4155126658e8" containerName="nova-metadata-metadata" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.808292 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81d4dd5-cd95-41c0-82e3-4155126658e8" containerName="nova-metadata-metadata" Jan 27 18:32:24 crc kubenswrapper[5049]: E0127 18:32:24.808315 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4893fc66-78ee-454f-a636-e5c7b30ecdb2" containerName="nova-api-log" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.808324 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4893fc66-78ee-454f-a636-e5c7b30ecdb2" containerName="nova-api-log" Jan 27 18:32:24 crc kubenswrapper[5049]: E0127 18:32:24.808346 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1711b5d9-b776-40c9-ad56-389cf4174909" containerName="nova-manage" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.808354 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="1711b5d9-b776-40c9-ad56-389cf4174909" containerName="nova-manage" Jan 27 18:32:24 crc kubenswrapper[5049]: E0127 18:32:24.808365 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4893fc66-78ee-454f-a636-e5c7b30ecdb2" containerName="nova-api-api" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.808374 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4893fc66-78ee-454f-a636-e5c7b30ecdb2" containerName="nova-api-api" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.808584 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="4893fc66-78ee-454f-a636-e5c7b30ecdb2" containerName="nova-api-api" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.808604 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a81d4dd5-cd95-41c0-82e3-4155126658e8" containerName="nova-metadata-metadata" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.808615 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a81d4dd5-cd95-41c0-82e3-4155126658e8" containerName="nova-metadata-log" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.808628 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="4893fc66-78ee-454f-a636-e5c7b30ecdb2" containerName="nova-api-log" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.808647 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="1711b5d9-b776-40c9-ad56-389cf4174909" containerName="nova-manage" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.809803 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.813316 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.815878 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.847293 5049 scope.go:117] "RemoveContainer" containerID="b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.868400 5049 scope.go:117] "RemoveContainer" containerID="a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2" Jan 27 18:32:24 crc kubenswrapper[5049]: E0127 18:32:24.868947 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2\": container with ID starting with a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2 not found: ID does not exist" containerID="a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.869018 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2"} err="failed to get container status \"a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2\": rpc error: code = NotFound desc = could not find container \"a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2\": container with ID starting with a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2 not found: ID does not exist" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.869116 5049 scope.go:117] "RemoveContainer" containerID="b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f" Jan 27 18:32:24 crc kubenswrapper[5049]: E0127 18:32:24.869580 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f\": container with ID starting with b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f not found: ID does not exist" containerID="b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.869638 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f"} err="failed to get container status \"b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f\": rpc error: code = NotFound desc = could not find container \"b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f\": container with ID starting with b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f not found: ID does not exist" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.869667 5049 scope.go:117] "RemoveContainer" containerID="a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.870156 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2"} err="failed to get container status \"a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2\": rpc error: code = 
NotFound desc = could not find container \"a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2\": container with ID starting with a7e6c1b5861fa4015eec183056b38cfa89bf6f212a86159e8b7024cceba584f2 not found: ID does not exist" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.870187 5049 scope.go:117] "RemoveContainer" containerID="b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.870454 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f"} err="failed to get container status \"b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f\": rpc error: code = NotFound desc = could not find container \"b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f\": container with ID starting with b87358aab85170c24662ee779ab8a031273d2f76913e7ced7ae6bc073ad0769f not found: ID does not exist" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.888558 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f945afb5-2a45-470e-904e-e461ac54c9a7-logs\") pod \"nova-api-0\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " pod="openstack/nova-api-0" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.888622 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f945afb5-2a45-470e-904e-e461ac54c9a7-config-data\") pod \"nova-api-0\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " pod="openstack/nova-api-0" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.888720 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f945afb5-2a45-470e-904e-e461ac54c9a7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " pod="openstack/nova-api-0" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.888811 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctpkt\" (UniqueName: \"kubernetes.io/projected/f945afb5-2a45-470e-904e-e461ac54c9a7-kube-api-access-ctpkt\") pod \"nova-api-0\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " pod="openstack/nova-api-0" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.990370 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f945afb5-2a45-470e-904e-e461ac54c9a7-logs\") pod \"nova-api-0\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " pod="openstack/nova-api-0" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.990422 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f945afb5-2a45-470e-904e-e461ac54c9a7-config-data\") pod \"nova-api-0\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " pod="openstack/nova-api-0" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.990453 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f945afb5-2a45-470e-904e-e461ac54c9a7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " pod="openstack/nova-api-0" Jan 27 18:32:24 crc 
kubenswrapper[5049]: I0127 18:32:24.990534 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctpkt\" (UniqueName: \"kubernetes.io/projected/f945afb5-2a45-470e-904e-e461ac54c9a7-kube-api-access-ctpkt\") pod \"nova-api-0\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " pod="openstack/nova-api-0" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.990927 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f945afb5-2a45-470e-904e-e461ac54c9a7-logs\") pod \"nova-api-0\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " pod="openstack/nova-api-0" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.995795 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f945afb5-2a45-470e-904e-e461ac54c9a7-config-data\") pod \"nova-api-0\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " pod="openstack/nova-api-0" Jan 27 18:32:24 crc kubenswrapper[5049]: I0127 18:32:24.995012 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f945afb5-2a45-470e-904e-e461ac54c9a7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " pod="openstack/nova-api-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.011022 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctpkt\" (UniqueName: \"kubernetes.io/projected/f945afb5-2a45-470e-904e-e461ac54c9a7-kube-api-access-ctpkt\") pod \"nova-api-0\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " pod="openstack/nova-api-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.079711 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.083742 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.093043 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.097328 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.107168 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.108856 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.114348 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.125165 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.128424 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.141287 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.195328 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90761aed-d27a-4fba-8246-1bd69b1faa72-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " pod="openstack/nova-metadata-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.196512 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90761aed-d27a-4fba-8246-1bd69b1faa72-logs\") pod \"nova-metadata-0\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " pod="openstack/nova-metadata-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.196577 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90761aed-d27a-4fba-8246-1bd69b1faa72-config-data\") pod \"nova-metadata-0\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " pod="openstack/nova-metadata-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.196595 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k9lz\" (UniqueName: \"kubernetes.io/projected/90761aed-d27a-4fba-8246-1bd69b1faa72-kube-api-access-5k9lz\") pod \"nova-metadata-0\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " pod="openstack/nova-metadata-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.226621 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b5b8c95d9-xhmzb"] Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.226882 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" podUID="927e7562-d4d8-4472-870e-02473145a9b6" containerName="dnsmasq-dns" containerID="cri-o://ab885909620c4798702b5c33c41b5fbbbfbc52faff7a3a42ac3d69c0ca2f1bea" gracePeriod=10 Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.298641 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90761aed-d27a-4fba-8246-1bd69b1faa72-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " pod="openstack/nova-metadata-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.298827 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90761aed-d27a-4fba-8246-1bd69b1faa72-logs\") pod \"nova-metadata-0\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " pod="openstack/nova-metadata-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.298886 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90761aed-d27a-4fba-8246-1bd69b1faa72-config-data\") pod \"nova-metadata-0\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " pod="openstack/nova-metadata-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.298916 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k9lz\" (UniqueName: \"kubernetes.io/projected/90761aed-d27a-4fba-8246-1bd69b1faa72-kube-api-access-5k9lz\") pod \"nova-metadata-0\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " pod="openstack/nova-metadata-0" Jan 27 18:32:25 crc 
kubenswrapper[5049]: I0127 18:32:25.299734 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90761aed-d27a-4fba-8246-1bd69b1faa72-logs\") pod \"nova-metadata-0\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " pod="openstack/nova-metadata-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.307474 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90761aed-d27a-4fba-8246-1bd69b1faa72-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " pod="openstack/nova-metadata-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.307714 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90761aed-d27a-4fba-8246-1bd69b1faa72-config-data\") pod \"nova-metadata-0\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " pod="openstack/nova-metadata-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.320320 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k9lz\" (UniqueName: \"kubernetes.io/projected/90761aed-d27a-4fba-8246-1bd69b1faa72-kube-api-access-5k9lz\") pod \"nova-metadata-0\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " pod="openstack/nova-metadata-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.429230 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.724478 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4893fc66-78ee-454f-a636-e5c7b30ecdb2" path="/var/lib/kubelet/pods/4893fc66-78ee-454f-a636-e5c7b30ecdb2/volumes" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.725381 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a81d4dd5-cd95-41c0-82e3-4155126658e8" path="/var/lib/kubelet/pods/a81d4dd5-cd95-41c0-82e3-4155126658e8/volumes" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.766140 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" event={"ID":"927e7562-d4d8-4472-870e-02473145a9b6","Type":"ContainerDied","Data":"ab885909620c4798702b5c33c41b5fbbbfbc52faff7a3a42ac3d69c0ca2f1bea"} Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.766094 5049 generic.go:334] "Generic (PLEG): container finished" podID="927e7562-d4d8-4472-870e-02473145a9b6" containerID="ab885909620c4798702b5c33c41b5fbbbfbc52faff7a3a42ac3d69c0ca2f1bea" exitCode=0 Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.807166 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.813566 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:25 crc kubenswrapper[5049]: I0127 18:32:25.938741 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.016641 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-ovsdbserver-nb\") pod \"927e7562-d4d8-4472-870e-02473145a9b6\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.017483 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92g4s\" (UniqueName: \"kubernetes.io/projected/927e7562-d4d8-4472-870e-02473145a9b6-kube-api-access-92g4s\") pod \"927e7562-d4d8-4472-870e-02473145a9b6\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.017819 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-dns-svc\") pod \"927e7562-d4d8-4472-870e-02473145a9b6\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.017918 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-config\") pod \"927e7562-d4d8-4472-870e-02473145a9b6\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.017963 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-ovsdbserver-sb\") pod \"927e7562-d4d8-4472-870e-02473145a9b6\" (UID: \"927e7562-d4d8-4472-870e-02473145a9b6\") " Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.026908 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927e7562-d4d8-4472-870e-02473145a9b6-kube-api-access-92g4s" (OuterVolumeSpecName: "kube-api-access-92g4s") pod "927e7562-d4d8-4472-870e-02473145a9b6" (UID: "927e7562-d4d8-4472-870e-02473145a9b6"). InnerVolumeSpecName "kube-api-access-92g4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.116702 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "927e7562-d4d8-4472-870e-02473145a9b6" (UID: "927e7562-d4d8-4472-870e-02473145a9b6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.120924 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92g4s\" (UniqueName: \"kubernetes.io/projected/927e7562-d4d8-4472-870e-02473145a9b6-kube-api-access-92g4s\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.120964 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.124302 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "927e7562-d4d8-4472-870e-02473145a9b6" (UID: "927e7562-d4d8-4472-870e-02473145a9b6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.143019 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "927e7562-d4d8-4472-870e-02473145a9b6" (UID: "927e7562-d4d8-4472-870e-02473145a9b6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.179341 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-config" (OuterVolumeSpecName: "config") pod "927e7562-d4d8-4472-870e-02473145a9b6" (UID: "927e7562-d4d8-4472-870e-02473145a9b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.196401 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.223568 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.223604 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.223617 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/927e7562-d4d8-4472-870e-02473145a9b6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.806204 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" event={"ID":"927e7562-d4d8-4472-870e-02473145a9b6","Type":"ContainerDied","Data":"2deb9010f83f6966958ff256c46b35f204ffd54220d45e486510a74913396a11"} Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.806565 5049 scope.go:117] "RemoveContainer" containerID="ab885909620c4798702b5c33c41b5fbbbfbc52faff7a3a42ac3d69c0ca2f1bea" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.806233 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b5b8c95d9-xhmzb" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.818655 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f945afb5-2a45-470e-904e-e461ac54c9a7","Type":"ContainerStarted","Data":"5458297a0ff91d47b4237c2b12d49887ae08d8ef7681fe111a6b9441ff9957cd"} Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.818724 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f945afb5-2a45-470e-904e-e461ac54c9a7","Type":"ContainerStarted","Data":"58b3092c34544b9e4f8f7e9636381ea48186f7a8d0d779a2e1fcb35a0fa5d3ff"} Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.818737 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f945afb5-2a45-470e-904e-e461ac54c9a7","Type":"ContainerStarted","Data":"8f1f91d4c7c26680fb601ef2b80eca728ecb132069b15a7a5e693035cc5c849e"} Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.822899 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90761aed-d27a-4fba-8246-1bd69b1faa72","Type":"ContainerStarted","Data":"8cd461ad9bf1926064834ac21cc60bdadbfce56ec6294436afbada274aa700fe"} Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.822933 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90761aed-d27a-4fba-8246-1bd69b1faa72","Type":"ContainerStarted","Data":"93a6a35a473d2136aba8dae49bc1a35769f481049393cd59176a1710a5714729"} Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.822944 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90761aed-d27a-4fba-8246-1bd69b1faa72","Type":"ContainerStarted","Data":"0ca893938c57d96f2baa2ba0e3036c297580d99c7cc9a79f67a749c3ea9e81d8"} Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.827584 5049 scope.go:117] "RemoveContainer" containerID="4602dba74ca49afd5790026b0563d4e3682bbe94f5d2fb4f063cb51bb6dd55bd" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.848700 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.84866545 podStartE2EDuration="2.84866545s" podCreationTimestamp="2026-01-27 18:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:26.838561855 +0000 UTC m=+5721.937535414" watchObservedRunningTime="2026-01-27 18:32:26.84866545 +0000 UTC m=+5721.947638999" Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.866703 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b5b8c95d9-xhmzb"] Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.875817 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b5b8c95d9-xhmzb"] Jan 27 18:32:26 crc kubenswrapper[5049]: I0127 18:32:26.879955 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.879932493 podStartE2EDuration="1.879932493s" podCreationTimestamp="2026-01-27 18:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:26.871886526 +0000 UTC m=+5721.970860085" watchObservedRunningTime="2026-01-27 18:32:26.879932493 +0000 UTC m=+5721.978906042" Jan 27 18:32:27 crc kubenswrapper[5049]: I0127 18:32:27.657728 5049 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="927e7562-d4d8-4472-870e-02473145a9b6" path="/var/lib/kubelet/pods/927e7562-d4d8-4472-870e-02473145a9b6/volumes" Jan 27 18:32:27 crc kubenswrapper[5049]: I0127 18:32:27.841171 5049 generic.go:334] "Generic (PLEG): container finished" podID="82071e1e-3ad0-45e7-9308-7f903a2434e4" containerID="94f7c858a0f6f10c4171853032a7d965158acfa98f072784827ef3ba2ff33585" exitCode=0 Jan 27 18:32:27 crc kubenswrapper[5049]: I0127 18:32:27.841299 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"82071e1e-3ad0-45e7-9308-7f903a2434e4","Type":"ContainerDied","Data":"94f7c858a0f6f10c4171853032a7d965158acfa98f072784827ef3ba2ff33585"} Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.043167 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.057084 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-combined-ca-bundle\") pod \"82071e1e-3ad0-45e7-9308-7f903a2434e4\" (UID: \"82071e1e-3ad0-45e7-9308-7f903a2434e4\") " Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.057255 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl2cm\" (UniqueName: \"kubernetes.io/projected/82071e1e-3ad0-45e7-9308-7f903a2434e4-kube-api-access-xl2cm\") pod \"82071e1e-3ad0-45e7-9308-7f903a2434e4\" (UID: \"82071e1e-3ad0-45e7-9308-7f903a2434e4\") " Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.057314 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-config-data\") pod \"82071e1e-3ad0-45e7-9308-7f903a2434e4\" (UID: \"82071e1e-3ad0-45e7-9308-7f903a2434e4\") " Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.070745 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82071e1e-3ad0-45e7-9308-7f903a2434e4-kube-api-access-xl2cm" (OuterVolumeSpecName: "kube-api-access-xl2cm") pod "82071e1e-3ad0-45e7-9308-7f903a2434e4" (UID: "82071e1e-3ad0-45e7-9308-7f903a2434e4"). InnerVolumeSpecName "kube-api-access-xl2cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:28 crc kubenswrapper[5049]: E0127 18:32:28.084726 5049 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-combined-ca-bundle podName:82071e1e-3ad0-45e7-9308-7f903a2434e4 nodeName:}" failed. No retries permitted until 2026-01-27 18:32:28.584694247 +0000 UTC m=+5723.683667806 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-combined-ca-bundle") pod "82071e1e-3ad0-45e7-9308-7f903a2434e4" (UID: "82071e1e-3ad0-45e7-9308-7f903a2434e4") : error deleting /var/lib/kubelet/pods/82071e1e-3ad0-45e7-9308-7f903a2434e4/volume-subpaths: remove /var/lib/kubelet/pods/82071e1e-3ad0-45e7-9308-7f903a2434e4/volume-subpaths: no such file or directory Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.088156 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-config-data" (OuterVolumeSpecName: "config-data") pod "82071e1e-3ad0-45e7-9308-7f903a2434e4" (UID: "82071e1e-3ad0-45e7-9308-7f903a2434e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.111171 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.160234 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl2cm\" (UniqueName: \"kubernetes.io/projected/82071e1e-3ad0-45e7-9308-7f903a2434e4-kube-api-access-xl2cm\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.160268 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.547372 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-nmjjt"] Jan 27 18:32:28 crc kubenswrapper[5049]: E0127 18:32:28.547753 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927e7562-d4d8-4472-870e-02473145a9b6" containerName="init" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.547771 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="927e7562-d4d8-4472-870e-02473145a9b6" containerName="init" Jan 27 18:32:28 crc kubenswrapper[5049]: E0127 18:32:28.547804 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82071e1e-3ad0-45e7-9308-7f903a2434e4" containerName="nova-scheduler-scheduler" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.547810 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="82071e1e-3ad0-45e7-9308-7f903a2434e4" containerName="nova-scheduler-scheduler" Jan 27 18:32:28 crc kubenswrapper[5049]: E0127 18:32:28.547821 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927e7562-d4d8-4472-870e-02473145a9b6" containerName="dnsmasq-dns" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.547826 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="927e7562-d4d8-4472-870e-02473145a9b6" containerName="dnsmasq-dns" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.547989 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="82071e1e-3ad0-45e7-9308-7f903a2434e4" containerName="nova-scheduler-scheduler" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.548025 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="927e7562-d4d8-4472-870e-02473145a9b6" containerName="dnsmasq-dns" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.548579 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.554926 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.555279 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.560751 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nmjjt"] Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.567863 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-config-data\") pod \"nova-cell1-cell-mapping-nmjjt\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.568093 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdd2r\" (UniqueName: \"kubernetes.io/projected/7d58451c-87da-448a-8923-a6c89915ef90-kube-api-access-wdd2r\") pod \"nova-cell1-cell-mapping-nmjjt\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.568577 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-scripts\") pod \"nova-cell1-cell-mapping-nmjjt\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.568734 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nmjjt\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.670433 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-combined-ca-bundle\") pod \"82071e1e-3ad0-45e7-9308-7f903a2434e4\" (UID: \"82071e1e-3ad0-45e7-9308-7f903a2434e4\") " Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.670819 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-scripts\") pod \"nova-cell1-cell-mapping-nmjjt\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.670884 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nmjjt\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.671110 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-config-data\") pod \"nova-cell1-cell-mapping-nmjjt\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.671134 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdd2r\" (UniqueName: \"kubernetes.io/projected/7d58451c-87da-448a-8923-a6c89915ef90-kube-api-access-wdd2r\") pod \"nova-cell1-cell-mapping-nmjjt\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.675252 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-scripts\") pod \"nova-cell1-cell-mapping-nmjjt\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.675381 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82071e1e-3ad0-45e7-9308-7f903a2434e4" (UID: "82071e1e-3ad0-45e7-9308-7f903a2434e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.685370 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-nmjjt\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.685390 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-config-data\") pod \"nova-cell1-cell-mapping-nmjjt\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.688856 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdd2r\" (UniqueName: \"kubernetes.io/projected/7d58451c-87da-448a-8923-a6c89915ef90-kube-api-access-wdd2r\") pod \"nova-cell1-cell-mapping-nmjjt\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.773012 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82071e1e-3ad0-45e7-9308-7f903a2434e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.853620 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"82071e1e-3ad0-45e7-9308-7f903a2434e4","Type":"ContainerDied","Data":"adc6bb6417928e5c988c031419c2c53728447dd9f0ace5987b1f06528a2e4dde"} Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.853683 5049 scope.go:117] "RemoveContainer" containerID="94f7c858a0f6f10c4171853032a7d965158acfa98f072784827ef3ba2ff33585" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.853745 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.891664 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.899381 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.916247 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.917727 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.919759 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.924179 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.933780 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.975739 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-config-data\") pod \"nova-scheduler-0\" (UID: \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.976055 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:28 crc kubenswrapper[5049]: I0127 18:32:28.976186 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2pkw\" (UniqueName: \"kubernetes.io/projected/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-kube-api-access-r2pkw\") pod \"nova-scheduler-0\" (UID: \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:29 crc kubenswrapper[5049]: I0127 18:32:29.078206 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-config-data\") pod \"nova-scheduler-0\" (UID: \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:29 crc kubenswrapper[5049]: I0127 18:32:29.078632 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:29 crc kubenswrapper[5049]: I0127 18:32:29.079179 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2pkw\" (UniqueName: \"kubernetes.io/projected/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-kube-api-access-r2pkw\") pod \"nova-scheduler-0\" (UID: \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:29 crc kubenswrapper[5049]: I0127 18:32:29.085875 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:29 crc kubenswrapper[5049]: I0127 18:32:29.098802 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-config-data\") pod \"nova-scheduler-0\" (UID: \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:29 crc kubenswrapper[5049]: I0127 18:32:29.101318 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2pkw\" (UniqueName: \"kubernetes.io/projected/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-kube-api-access-r2pkw\") pod \"nova-scheduler-0\" (UID: \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:29 crc kubenswrapper[5049]: I0127 18:32:29.242345 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:32:29 crc kubenswrapper[5049]: I0127 18:32:29.396062 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-nmjjt"] Jan 27 18:32:29 crc kubenswrapper[5049]: W0127 18:32:29.402008 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d58451c_87da_448a_8923_a6c89915ef90.slice/crio-8311a1c9321df3b88ae99529a0ee14e6be3fd656f57c9146ebf0da1779a3c94c WatchSource:0}: Error finding container 8311a1c9321df3b88ae99529a0ee14e6be3fd656f57c9146ebf0da1779a3c94c: Status 404 returned error can't find the container with id 8311a1c9321df3b88ae99529a0ee14e6be3fd656f57c9146ebf0da1779a3c94c Jan 27 18:32:29 crc kubenswrapper[5049]: I0127 18:32:29.656859 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82071e1e-3ad0-45e7-9308-7f903a2434e4" path="/var/lib/kubelet/pods/82071e1e-3ad0-45e7-9308-7f903a2434e4/volumes" Jan 27 18:32:29 crc kubenswrapper[5049]: I0127 18:32:29.709499 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:29 crc kubenswrapper[5049]: W0127 18:32:29.709943 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf80a74d_27fc_4c92_b19c_f71f1bd8dba5.slice/crio-046f6fcb29e56dd0579566eff1b95a05f1b257cf2a2545fb0b3fdcffc68e6bb6 WatchSource:0}: Error finding container 046f6fcb29e56dd0579566eff1b95a05f1b257cf2a2545fb0b3fdcffc68e6bb6: Status 404 returned error can't find the container with id 046f6fcb29e56dd0579566eff1b95a05f1b257cf2a2545fb0b3fdcffc68e6bb6 Jan 27 18:32:29 crc kubenswrapper[5049]: I0127 18:32:29.863969 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nmjjt" event={"ID":"7d58451c-87da-448a-8923-a6c89915ef90","Type":"ContainerStarted","Data":"0ca8c9b5f65773a92b01cbf3525e1b4344752b0501c7310325360ff93787dc3a"} Jan 27 18:32:29 crc kubenswrapper[5049]: I0127 18:32:29.864025 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nmjjt" event={"ID":"7d58451c-87da-448a-8923-a6c89915ef90","Type":"ContainerStarted","Data":"8311a1c9321df3b88ae99529a0ee14e6be3fd656f57c9146ebf0da1779a3c94c"} Jan 27 18:32:29 crc kubenswrapper[5049]: I0127 18:32:29.865150 5049 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-scheduler-0" event={"ID":"af80a74d-27fc-4c92-b19c-f71f1bd8dba5","Type":"ContainerStarted","Data":"046f6fcb29e56dd0579566eff1b95a05f1b257cf2a2545fb0b3fdcffc68e6bb6"} Jan 27 18:32:29 crc kubenswrapper[5049]: I0127 18:32:29.883374 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-nmjjt" podStartSLOduration=1.8833568 podStartE2EDuration="1.8833568s" podCreationTimestamp="2026-01-27 18:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:29.878135832 +0000 UTC m=+5724.977109381" watchObservedRunningTime="2026-01-27 18:32:29.8833568 +0000 UTC m=+5724.982330349" Jan 27 18:32:30 crc kubenswrapper[5049]: I0127 18:32:30.430370 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:32:30 crc kubenswrapper[5049]: I0127 18:32:30.430425 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:32:30 crc kubenswrapper[5049]: I0127 18:32:30.880491 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"af80a74d-27fc-4c92-b19c-f71f1bd8dba5","Type":"ContainerStarted","Data":"20748eb69043377974e923970531debc6dbd13a48b6289cb3e173dc367563f62"} Jan 27 18:32:30 crc kubenswrapper[5049]: I0127 18:32:30.905767 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.905746314 podStartE2EDuration="2.905746314s" podCreationTimestamp="2026-01-27 18:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:30.89956504 +0000 UTC m=+5725.998538589" watchObservedRunningTime="2026-01-27 18:32:30.905746314 +0000 UTC m=+5726.004719873" Jan 27 18:32:34 crc kubenswrapper[5049]: I0127 18:32:34.242804 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 18:32:34 crc kubenswrapper[5049]: I0127 18:32:34.923542 5049 generic.go:334] "Generic (PLEG): container finished" podID="7d58451c-87da-448a-8923-a6c89915ef90" containerID="0ca8c9b5f65773a92b01cbf3525e1b4344752b0501c7310325360ff93787dc3a" exitCode=0 Jan 27 18:32:34 crc kubenswrapper[5049]: I0127 18:32:34.923588 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nmjjt" event={"ID":"7d58451c-87da-448a-8923-a6c89915ef90","Type":"ContainerDied","Data":"0ca8c9b5f65773a92b01cbf3525e1b4344752b0501c7310325360ff93787dc3a"} Jan 27 18:32:35 crc kubenswrapper[5049]: I0127 18:32:35.142730 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 18:32:35 crc kubenswrapper[5049]: I0127 18:32:35.142782 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 18:32:35 crc kubenswrapper[5049]: I0127 18:32:35.430006 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 18:32:35 crc kubenswrapper[5049]: I0127 18:32:35.430065 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.233483 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f945afb5-2a45-470e-904e-e461ac54c9a7" 
containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.70:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.233861 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f945afb5-2a45-470e-904e-e461ac54c9a7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.70:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.511107 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.532878 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="90761aed-d27a-4fba-8246-1bd69b1faa72" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.71:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.532918 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="90761aed-d27a-4fba-8246-1bd69b1faa72" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.71:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.650927 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-combined-ca-bundle\") pod \"7d58451c-87da-448a-8923-a6c89915ef90\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.651103 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-config-data\") pod \"7d58451c-87da-448a-8923-a6c89915ef90\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.651269 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdd2r\" (UniqueName: \"kubernetes.io/projected/7d58451c-87da-448a-8923-a6c89915ef90-kube-api-access-wdd2r\") pod \"7d58451c-87da-448a-8923-a6c89915ef90\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.651306 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-scripts\") pod \"7d58451c-87da-448a-8923-a6c89915ef90\" (UID: \"7d58451c-87da-448a-8923-a6c89915ef90\") " Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.658297 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d58451c-87da-448a-8923-a6c89915ef90-kube-api-access-wdd2r" (OuterVolumeSpecName: "kube-api-access-wdd2r") pod "7d58451c-87da-448a-8923-a6c89915ef90" (UID: "7d58451c-87da-448a-8923-a6c89915ef90"). InnerVolumeSpecName "kube-api-access-wdd2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.668404 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-scripts" (OuterVolumeSpecName: "scripts") pod "7d58451c-87da-448a-8923-a6c89915ef90" (UID: "7d58451c-87da-448a-8923-a6c89915ef90"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.699627 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-config-data" (OuterVolumeSpecName: "config-data") pod "7d58451c-87da-448a-8923-a6c89915ef90" (UID: "7d58451c-87da-448a-8923-a6c89915ef90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.700832 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d58451c-87da-448a-8923-a6c89915ef90" (UID: "7d58451c-87da-448a-8923-a6c89915ef90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.753211 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.753487 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.753571 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdd2r\" (UniqueName: \"kubernetes.io/projected/7d58451c-87da-448a-8923-a6c89915ef90-kube-api-access-wdd2r\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.753636 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d58451c-87da-448a-8923-a6c89915ef90-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.941094 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-nmjjt" event={"ID":"7d58451c-87da-448a-8923-a6c89915ef90","Type":"ContainerDied","Data":"8311a1c9321df3b88ae99529a0ee14e6be3fd656f57c9146ebf0da1779a3c94c"} Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.941136 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8311a1c9321df3b88ae99529a0ee14e6be3fd656f57c9146ebf0da1779a3c94c" Jan 27 18:32:36 crc kubenswrapper[5049]: I0127 18:32:36.941145 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-nmjjt" Jan 27 18:32:37 crc kubenswrapper[5049]: I0127 18:32:37.168245 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:37 crc kubenswrapper[5049]: I0127 18:32:37.168554 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f945afb5-2a45-470e-904e-e461ac54c9a7" containerName="nova-api-log" containerID="cri-o://58b3092c34544b9e4f8f7e9636381ea48186f7a8d0d779a2e1fcb35a0fa5d3ff" gracePeriod=30 Jan 27 18:32:37 crc kubenswrapper[5049]: I0127 18:32:37.168738 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f945afb5-2a45-470e-904e-e461ac54c9a7" containerName="nova-api-api" containerID="cri-o://5458297a0ff91d47b4237c2b12d49887ae08d8ef7681fe111a6b9441ff9957cd" gracePeriod=30 Jan 27 18:32:37 crc kubenswrapper[5049]: I0127 18:32:37.177137 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:37 crc kubenswrapper[5049]: I0127 18:32:37.177353 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="af80a74d-27fc-4c92-b19c-f71f1bd8dba5" containerName="nova-scheduler-scheduler" containerID="cri-o://20748eb69043377974e923970531debc6dbd13a48b6289cb3e173dc367563f62" gracePeriod=30 Jan 27 18:32:37 crc kubenswrapper[5049]: I0127 18:32:37.214187 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:37 crc kubenswrapper[5049]: I0127 18:32:37.214463 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="90761aed-d27a-4fba-8246-1bd69b1faa72" containerName="nova-metadata-log" containerID="cri-o://93a6a35a473d2136aba8dae49bc1a35769f481049393cd59176a1710a5714729" gracePeriod=30 Jan 27 18:32:37 crc kubenswrapper[5049]: I0127 18:32:37.214554 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="90761aed-d27a-4fba-8246-1bd69b1faa72" containerName="nova-metadata-metadata" containerID="cri-o://8cd461ad9bf1926064834ac21cc60bdadbfce56ec6294436afbada274aa700fe" gracePeriod=30 Jan 27 18:32:37 crc kubenswrapper[5049]: I0127 18:32:37.957000 5049 generic.go:334] "Generic (PLEG): container finished" podID="f945afb5-2a45-470e-904e-e461ac54c9a7" containerID="58b3092c34544b9e4f8f7e9636381ea48186f7a8d0d779a2e1fcb35a0fa5d3ff" exitCode=143 Jan 27 18:32:37 crc kubenswrapper[5049]: I0127 18:32:37.957082 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f945afb5-2a45-470e-904e-e461ac54c9a7","Type":"ContainerDied","Data":"58b3092c34544b9e4f8f7e9636381ea48186f7a8d0d779a2e1fcb35a0fa5d3ff"} Jan 27 18:32:37 crc kubenswrapper[5049]: I0127 18:32:37.963457 5049 generic.go:334] "Generic (PLEG): container finished" podID="90761aed-d27a-4fba-8246-1bd69b1faa72" containerID="93a6a35a473d2136aba8dae49bc1a35769f481049393cd59176a1710a5714729" exitCode=143 Jan 27 18:32:37 crc kubenswrapper[5049]: I0127 18:32:37.963517 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90761aed-d27a-4fba-8246-1bd69b1faa72","Type":"ContainerDied","Data":"93a6a35a473d2136aba8dae49bc1a35769f481049393cd59176a1710a5714729"} Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:41.882851 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:41.950952 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2pkw\" (UniqueName: \"kubernetes.io/projected/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-kube-api-access-r2pkw\") pod \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\" (UID: \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\") " Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:41.951069 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-combined-ca-bundle\") pod \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\" (UID: \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\") " Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:41.951163 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-config-data\") pod \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\" (UID: \"af80a74d-27fc-4c92-b19c-f71f1bd8dba5\") " Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:41.956826 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-kube-api-access-r2pkw" (OuterVolumeSpecName: "kube-api-access-r2pkw") pod "af80a74d-27fc-4c92-b19c-f71f1bd8dba5" (UID: "af80a74d-27fc-4c92-b19c-f71f1bd8dba5"). InnerVolumeSpecName "kube-api-access-r2pkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:41.978377 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-config-data" (OuterVolumeSpecName: "config-data") pod "af80a74d-27fc-4c92-b19c-f71f1bd8dba5" (UID: "af80a74d-27fc-4c92-b19c-f71f1bd8dba5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:41.982290 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af80a74d-27fc-4c92-b19c-f71f1bd8dba5" (UID: "af80a74d-27fc-4c92-b19c-f71f1bd8dba5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.001263 5049 generic.go:334] "Generic (PLEG): container finished" podID="f945afb5-2a45-470e-904e-e461ac54c9a7" containerID="5458297a0ff91d47b4237c2b12d49887ae08d8ef7681fe111a6b9441ff9957cd" exitCode=0 Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.001340 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f945afb5-2a45-470e-904e-e461ac54c9a7","Type":"ContainerDied","Data":"5458297a0ff91d47b4237c2b12d49887ae08d8ef7681fe111a6b9441ff9957cd"} Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.003892 5049 generic.go:334] "Generic (PLEG): container finished" podID="90761aed-d27a-4fba-8246-1bd69b1faa72" containerID="8cd461ad9bf1926064834ac21cc60bdadbfce56ec6294436afbada274aa700fe" exitCode=0 Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.003963 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90761aed-d27a-4fba-8246-1bd69b1faa72","Type":"ContainerDied","Data":"8cd461ad9bf1926064834ac21cc60bdadbfce56ec6294436afbada274aa700fe"} Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.005589 5049 generic.go:334] "Generic (PLEG): container finished" podID="af80a74d-27fc-4c92-b19c-f71f1bd8dba5" containerID="20748eb69043377974e923970531debc6dbd13a48b6289cb3e173dc367563f62" exitCode=0 Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.005624 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.005629 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"af80a74d-27fc-4c92-b19c-f71f1bd8dba5","Type":"ContainerDied","Data":"20748eb69043377974e923970531debc6dbd13a48b6289cb3e173dc367563f62"} Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.005655 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"af80a74d-27fc-4c92-b19c-f71f1bd8dba5","Type":"ContainerDied","Data":"046f6fcb29e56dd0579566eff1b95a05f1b257cf2a2545fb0b3fdcffc68e6bb6"} Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.005698 5049 scope.go:117] "RemoveContainer" containerID="20748eb69043377974e923970531debc6dbd13a48b6289cb3e173dc367563f62" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.039880 5049 scope.go:117] "RemoveContainer" containerID="20748eb69043377974e923970531debc6dbd13a48b6289cb3e173dc367563f62" Jan 27 18:32:42 crc kubenswrapper[5049]: E0127 18:32:42.041028 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20748eb69043377974e923970531debc6dbd13a48b6289cb3e173dc367563f62\": container with ID starting with 20748eb69043377974e923970531debc6dbd13a48b6289cb3e173dc367563f62 not found: ID does not exist" containerID="20748eb69043377974e923970531debc6dbd13a48b6289cb3e173dc367563f62" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.041076 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20748eb69043377974e923970531debc6dbd13a48b6289cb3e173dc367563f62"} err="failed to get container status \"20748eb69043377974e923970531debc6dbd13a48b6289cb3e173dc367563f62\": rpc error: code = NotFound desc = could not find container \"20748eb69043377974e923970531debc6dbd13a48b6289cb3e173dc367563f62\": container with ID starting with 
20748eb69043377974e923970531debc6dbd13a48b6289cb3e173dc367563f62 not found: ID does not exist" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.053404 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.053433 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2pkw\" (UniqueName: \"kubernetes.io/projected/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-kube-api-access-r2pkw\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.053442 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af80a74d-27fc-4c92-b19c-f71f1bd8dba5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.066484 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.082366 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.091243 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:42 crc kubenswrapper[5049]: E0127 18:32:42.091757 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d58451c-87da-448a-8923-a6c89915ef90" containerName="nova-manage" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.091772 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d58451c-87da-448a-8923-a6c89915ef90" containerName="nova-manage" Jan 27 18:32:42 crc kubenswrapper[5049]: E0127 18:32:42.091783 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af80a74d-27fc-4c92-b19c-f71f1bd8dba5" containerName="nova-scheduler-scheduler" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.091792 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="af80a74d-27fc-4c92-b19c-f71f1bd8dba5" containerName="nova-scheduler-scheduler" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.091984 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="af80a74d-27fc-4c92-b19c-f71f1bd8dba5" containerName="nova-scheduler-scheduler" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.092007 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d58451c-87da-448a-8923-a6c89915ef90" containerName="nova-manage" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.092801 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.096161 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.114717 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.155087 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.155147 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-config-data\") pod \"nova-scheduler-0\" (UID: \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.155538 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5mjq\" (UniqueName: \"kubernetes.io/projected/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-kube-api-access-f5mjq\") pod \"nova-scheduler-0\" (UID: \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.257080 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5mjq\" (UniqueName: \"kubernetes.io/projected/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-kube-api-access-f5mjq\") pod \"nova-scheduler-0\" (UID: \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.257165 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.257202 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-config-data\") pod \"nova-scheduler-0\" (UID: \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.260420 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.260888 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-config-data\") pod \"nova-scheduler-0\" (UID: \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.277902 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5mjq\" (UniqueName: 
\"kubernetes.io/projected/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-kube-api-access-f5mjq\") pod \"nova-scheduler-0\" (UID: \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\") " pod="openstack/nova-scheduler-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.416065 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.652597 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.664957 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.766580 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90761aed-d27a-4fba-8246-1bd69b1faa72-logs\") pod \"90761aed-d27a-4fba-8246-1bd69b1faa72\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.766640 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90761aed-d27a-4fba-8246-1bd69b1faa72-config-data\") pod \"90761aed-d27a-4fba-8246-1bd69b1faa72\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.766683 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f945afb5-2a45-470e-904e-e461ac54c9a7-combined-ca-bundle\") pod \"f945afb5-2a45-470e-904e-e461ac54c9a7\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.766819 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctpkt\" (UniqueName: \"kubernetes.io/projected/f945afb5-2a45-470e-904e-e461ac54c9a7-kube-api-access-ctpkt\") pod \"f945afb5-2a45-470e-904e-e461ac54c9a7\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.766878 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90761aed-d27a-4fba-8246-1bd69b1faa72-combined-ca-bundle\") pod \"90761aed-d27a-4fba-8246-1bd69b1faa72\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.766959 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k9lz\" (UniqueName: \"kubernetes.io/projected/90761aed-d27a-4fba-8246-1bd69b1faa72-kube-api-access-5k9lz\") pod \"90761aed-d27a-4fba-8246-1bd69b1faa72\" (UID: \"90761aed-d27a-4fba-8246-1bd69b1faa72\") " Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.766998 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f945afb5-2a45-470e-904e-e461ac54c9a7-config-data\") pod \"f945afb5-2a45-470e-904e-e461ac54c9a7\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.767050 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f945afb5-2a45-470e-904e-e461ac54c9a7-logs\") pod \"f945afb5-2a45-470e-904e-e461ac54c9a7\" (UID: \"f945afb5-2a45-470e-904e-e461ac54c9a7\") " Jan 27 18:32:42 crc 
kubenswrapper[5049]: I0127 18:32:42.767356 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90761aed-d27a-4fba-8246-1bd69b1faa72-logs" (OuterVolumeSpecName: "logs") pod "90761aed-d27a-4fba-8246-1bd69b1faa72" (UID: "90761aed-d27a-4fba-8246-1bd69b1faa72"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.767806 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90761aed-d27a-4fba-8246-1bd69b1faa72-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.768290 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f945afb5-2a45-470e-904e-e461ac54c9a7-logs" (OuterVolumeSpecName: "logs") pod "f945afb5-2a45-470e-904e-e461ac54c9a7" (UID: "f945afb5-2a45-470e-904e-e461ac54c9a7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.771912 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90761aed-d27a-4fba-8246-1bd69b1faa72-kube-api-access-5k9lz" (OuterVolumeSpecName: "kube-api-access-5k9lz") pod "90761aed-d27a-4fba-8246-1bd69b1faa72" (UID: "90761aed-d27a-4fba-8246-1bd69b1faa72"). InnerVolumeSpecName "kube-api-access-5k9lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.774883 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f945afb5-2a45-470e-904e-e461ac54c9a7-kube-api-access-ctpkt" (OuterVolumeSpecName: "kube-api-access-ctpkt") pod "f945afb5-2a45-470e-904e-e461ac54c9a7" (UID: "f945afb5-2a45-470e-904e-e461ac54c9a7"). InnerVolumeSpecName "kube-api-access-ctpkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.791714 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90761aed-d27a-4fba-8246-1bd69b1faa72-config-data" (OuterVolumeSpecName: "config-data") pod "90761aed-d27a-4fba-8246-1bd69b1faa72" (UID: "90761aed-d27a-4fba-8246-1bd69b1faa72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.797526 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90761aed-d27a-4fba-8246-1bd69b1faa72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90761aed-d27a-4fba-8246-1bd69b1faa72" (UID: "90761aed-d27a-4fba-8246-1bd69b1faa72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.800766 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f945afb5-2a45-470e-904e-e461ac54c9a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f945afb5-2a45-470e-904e-e461ac54c9a7" (UID: "f945afb5-2a45-470e-904e-e461ac54c9a7"). InnerVolumeSpecName "combined-ca-bundle". 
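The ContainerDied events in this capture report exitCode=143 for the log sidecars and exitCode=0 for the main containers, which matches the grace-period kills above: by the 128+n convention used by shells and container runtimes, 143 is 128+15, i.e. SIGTERM was delivered and terminated the process, while the API processes shut down cleanly on their own. A small decoder sketch (Unix-only, purely illustrative):

    package main

    import (
        "fmt"
        "syscall"
    )

    // Decode an exit code the way POSIX shells and container runtimes
    // report signal deaths: codes above 128 are 128 + signal number.
    func describe(code int) string {
        switch {
        case code == 0:
            return "clean exit"
        case code > 128 && code < 128+64:
            sig := syscall.Signal(code - 128)
            return fmt.Sprintf("killed by signal %d (%v)", code-128, sig)
        default:
            return "application error"
        }
    }

    func main() {
        for _, c := range []int{0, 143} {
            fmt.Printf("exitCode=%d -> %s\n", c, describe(c))
        }
    }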
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.815079 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f945afb5-2a45-470e-904e-e461ac54c9a7-config-data" (OuterVolumeSpecName: "config-data") pod "f945afb5-2a45-470e-904e-e461ac54c9a7" (UID: "f945afb5-2a45-470e-904e-e461ac54c9a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.869889 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctpkt\" (UniqueName: \"kubernetes.io/projected/f945afb5-2a45-470e-904e-e461ac54c9a7-kube-api-access-ctpkt\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.870017 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90761aed-d27a-4fba-8246-1bd69b1faa72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.870033 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k9lz\" (UniqueName: \"kubernetes.io/projected/90761aed-d27a-4fba-8246-1bd69b1faa72-kube-api-access-5k9lz\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.870045 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f945afb5-2a45-470e-904e-e461ac54c9a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.870058 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f945afb5-2a45-470e-904e-e461ac54c9a7-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.870070 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90761aed-d27a-4fba-8246-1bd69b1faa72-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.870080 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f945afb5-2a45-470e-904e-e461ac54c9a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:32:42 crc kubenswrapper[5049]: I0127 18:32:42.890329 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 18:32:42 crc kubenswrapper[5049]: W0127 18:32:42.901343 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1016fc2_6824_4d3e_a31b_d4dd3617cc4d.slice/crio-a4d6f98353487cfc1ed0218036e8ac7bff16e6b6d8e25bc0ddf5cc53b3b88ebd WatchSource:0}: Error finding container a4d6f98353487cfc1ed0218036e8ac7bff16e6b6d8e25bc0ddf5cc53b3b88ebd: Status 404 returned error can't find the container with id a4d6f98353487cfc1ed0218036e8ac7bff16e6b6d8e25bc0ddf5cc53b3b88ebd Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.023900 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.023909 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"90761aed-d27a-4fba-8246-1bd69b1faa72","Type":"ContainerDied","Data":"0ca893938c57d96f2baa2ba0e3036c297580d99c7cc9a79f67a749c3ea9e81d8"} Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.024022 5049 scope.go:117] "RemoveContainer" containerID="8cd461ad9bf1926064834ac21cc60bdadbfce56ec6294436afbada274aa700fe" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.027164 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d","Type":"ContainerStarted","Data":"a4d6f98353487cfc1ed0218036e8ac7bff16e6b6d8e25bc0ddf5cc53b3b88ebd"} Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.032548 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f945afb5-2a45-470e-904e-e461ac54c9a7","Type":"ContainerDied","Data":"8f1f91d4c7c26680fb601ef2b80eca728ecb132069b15a7a5e693035cc5c849e"} Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.032633 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.045208 5049 scope.go:117] "RemoveContainer" containerID="93a6a35a473d2136aba8dae49bc1a35769f481049393cd59176a1710a5714729" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.076063 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.088520 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.089851 5049 scope.go:117] "RemoveContainer" containerID="5458297a0ff91d47b4237c2b12d49887ae08d8ef7681fe111a6b9441ff9957cd" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.097153 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.107539 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:43 crc kubenswrapper[5049]: E0127 18:32:43.108071 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90761aed-d27a-4fba-8246-1bd69b1faa72" containerName="nova-metadata-metadata" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.108105 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="90761aed-d27a-4fba-8246-1bd69b1faa72" containerName="nova-metadata-metadata" Jan 27 18:32:43 crc kubenswrapper[5049]: E0127 18:32:43.108122 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90761aed-d27a-4fba-8246-1bd69b1faa72" containerName="nova-metadata-log" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.108132 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="90761aed-d27a-4fba-8246-1bd69b1faa72" containerName="nova-metadata-log" Jan 27 18:32:43 crc kubenswrapper[5049]: E0127 18:32:43.108148 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f945afb5-2a45-470e-904e-e461ac54c9a7" containerName="nova-api-api" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.108156 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f945afb5-2a45-470e-904e-e461ac54c9a7" containerName="nova-api-api" Jan 27 18:32:43 crc kubenswrapper[5049]: E0127 18:32:43.108187 5049 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f945afb5-2a45-470e-904e-e461ac54c9a7" containerName="nova-api-log" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.108195 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f945afb5-2a45-470e-904e-e461ac54c9a7" containerName="nova-api-log" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.108409 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="90761aed-d27a-4fba-8246-1bd69b1faa72" containerName="nova-metadata-log" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.108436 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f945afb5-2a45-470e-904e-e461ac54c9a7" containerName="nova-api-log" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.108454 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f945afb5-2a45-470e-904e-e461ac54c9a7" containerName="nova-api-api" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.108482 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="90761aed-d27a-4fba-8246-1bd69b1faa72" containerName="nova-metadata-metadata" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.109600 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.117270 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.120536 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.127902 5049 scope.go:117] "RemoveContainer" containerID="58b3092c34544b9e4f8f7e9636381ea48186f7a8d0d779a2e1fcb35a0fa5d3ff" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.132823 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.178214 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.180493 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.180572 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6lk2\" (UniqueName: \"kubernetes.io/projected/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-kube-api-access-z6lk2\") pod \"nova-metadata-0\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.180604 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-logs\") pod \"nova-metadata-0\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.180646 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-config-data\") pod \"nova-metadata-0\" (UID: 
\"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.182400 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.184874 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.189713 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.282609 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.282734 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6lk2\" (UniqueName: \"kubernetes.io/projected/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-kube-api-access-z6lk2\") pod \"nova-metadata-0\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.282758 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-logs\") pod \"nova-metadata-0\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.282783 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcph7\" (UniqueName: \"kubernetes.io/projected/821290a0-bf1f-4ad2-b28c-b3065d704a40-kube-api-access-bcph7\") pod \"nova-api-0\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.282805 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-config-data\") pod \"nova-metadata-0\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.282872 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/821290a0-bf1f-4ad2-b28c-b3065d704a40-logs\") pod \"nova-api-0\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.282892 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821290a0-bf1f-4ad2-b28c-b3065d704a40-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.282923 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821290a0-bf1f-4ad2-b28c-b3065d704a40-config-data\") pod \"nova-api-0\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.283420 5049 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-logs\") pod \"nova-metadata-0\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.289487 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.290940 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-config-data\") pod \"nova-metadata-0\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.302335 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6lk2\" (UniqueName: \"kubernetes.io/projected/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-kube-api-access-z6lk2\") pod \"nova-metadata-0\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.384473 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/821290a0-bf1f-4ad2-b28c-b3065d704a40-logs\") pod \"nova-api-0\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.384805 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821290a0-bf1f-4ad2-b28c-b3065d704a40-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.385348 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821290a0-bf1f-4ad2-b28c-b3065d704a40-config-data\") pod \"nova-api-0\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.384914 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/821290a0-bf1f-4ad2-b28c-b3065d704a40-logs\") pod \"nova-api-0\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.385723 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcph7\" (UniqueName: \"kubernetes.io/projected/821290a0-bf1f-4ad2-b28c-b3065d704a40-kube-api-access-bcph7\") pod \"nova-api-0\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.388328 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821290a0-bf1f-4ad2-b28c-b3065d704a40-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.392296 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/821290a0-bf1f-4ad2-b28c-b3065d704a40-config-data\") pod \"nova-api-0\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.401479 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcph7\" (UniqueName: \"kubernetes.io/projected/821290a0-bf1f-4ad2-b28c-b3065d704a40-kube-api-access-bcph7\") pod \"nova-api-0\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.451893 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.501255 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.663242 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90761aed-d27a-4fba-8246-1bd69b1faa72" path="/var/lib/kubelet/pods/90761aed-d27a-4fba-8246-1bd69b1faa72/volumes" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.664299 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af80a74d-27fc-4c92-b19c-f71f1bd8dba5" path="/var/lib/kubelet/pods/af80a74d-27fc-4c92-b19c-f71f1bd8dba5/volumes" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.664977 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f945afb5-2a45-470e-904e-e461ac54c9a7" path="/var/lib/kubelet/pods/f945afb5-2a45-470e-904e-e461ac54c9a7/volumes" Jan 27 18:32:43 crc kubenswrapper[5049]: I0127 18:32:43.927381 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:32:43 crc kubenswrapper[5049]: W0127 18:32:43.930932 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc03ce4e8_c3ee_491f_834a_1bceadb58aa9.slice/crio-d1baf755322455d5d53d4fc785cec0a4be865b5b69778d2ccb607fd39c5b26e0 WatchSource:0}: Error finding container d1baf755322455d5d53d4fc785cec0a4be865b5b69778d2ccb607fd39c5b26e0: Status 404 returned error can't find the container with id d1baf755322455d5d53d4fc785cec0a4be865b5b69778d2ccb607fd39c5b26e0 Jan 27 18:32:44 crc kubenswrapper[5049]: I0127 18:32:44.033688 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 18:32:44 crc kubenswrapper[5049]: W0127 18:32:44.037703 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod821290a0_bf1f_4ad2_b28c_b3065d704a40.slice/crio-ec7028a08fe31e6ab13503e5669831ad2424eb41606a74a940c63290629d5cdb WatchSource:0}: Error finding container ec7028a08fe31e6ab13503e5669831ad2424eb41606a74a940c63290629d5cdb: Status 404 returned error can't find the container with id ec7028a08fe31e6ab13503e5669831ad2424eb41606a74a940c63290629d5cdb Jan 27 18:32:44 crc kubenswrapper[5049]: I0127 18:32:44.046349 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d","Type":"ContainerStarted","Data":"fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47"} Jan 27 18:32:44 crc kubenswrapper[5049]: I0127 18:32:44.048028 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"c03ce4e8-c3ee-491f-834a-1bceadb58aa9","Type":"ContainerStarted","Data":"d1baf755322455d5d53d4fc785cec0a4be865b5b69778d2ccb607fd39c5b26e0"} Jan 27 18:32:44 crc kubenswrapper[5049]: I0127 18:32:44.067134 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.067112375 podStartE2EDuration="2.067112375s" podCreationTimestamp="2026-01-27 18:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:44.06021471 +0000 UTC m=+5739.159188259" watchObservedRunningTime="2026-01-27 18:32:44.067112375 +0000 UTC m=+5739.166085924" Jan 27 18:32:45 crc kubenswrapper[5049]: I0127 18:32:45.058820 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"821290a0-bf1f-4ad2-b28c-b3065d704a40","Type":"ContainerStarted","Data":"39cb2cd5dcc52237a8bf7e4e1e70534d480040bef6b39acbe2d26693236a1c7a"} Jan 27 18:32:45 crc kubenswrapper[5049]: I0127 18:32:45.060043 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"821290a0-bf1f-4ad2-b28c-b3065d704a40","Type":"ContainerStarted","Data":"c6d6d842f96dba07f7ccce130d35b8a83146b1f258fd9547e5e3a427a0181164"} Jan 27 18:32:45 crc kubenswrapper[5049]: I0127 18:32:45.060127 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"821290a0-bf1f-4ad2-b28c-b3065d704a40","Type":"ContainerStarted","Data":"ec7028a08fe31e6ab13503e5669831ad2424eb41606a74a940c63290629d5cdb"} Jan 27 18:32:45 crc kubenswrapper[5049]: I0127 18:32:45.062816 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c03ce4e8-c3ee-491f-834a-1bceadb58aa9","Type":"ContainerStarted","Data":"1849e002e502ad61c6b853bd3a4fdc70c1bba2e5ce74848e75fefbc200ebd098"} Jan 27 18:32:45 crc kubenswrapper[5049]: I0127 18:32:45.063082 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c03ce4e8-c3ee-491f-834a-1bceadb58aa9","Type":"ContainerStarted","Data":"621d6abb816c47c454d53ba3881659523c3a47c80d80e720f068017b7daeef49"} Jan 27 18:32:45 crc kubenswrapper[5049]: I0127 18:32:45.083445 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.083423449 podStartE2EDuration="2.083423449s" podCreationTimestamp="2026-01-27 18:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:45.077965995 +0000 UTC m=+5740.176939564" watchObservedRunningTime="2026-01-27 18:32:45.083423449 +0000 UTC m=+5740.182396998" Jan 27 18:32:45 crc kubenswrapper[5049]: I0127 18:32:45.099173 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.099151653 podStartE2EDuration="2.099151653s" podCreationTimestamp="2026-01-27 18:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:32:45.095800718 +0000 UTC m=+5740.194774267" watchObservedRunningTime="2026-01-27 18:32:45.099151653 +0000 UTC m=+5740.198125202" Jan 27 18:32:47 crc kubenswrapper[5049]: I0127 18:32:47.417467 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 18:32:47 crc kubenswrapper[5049]: I0127 18:32:47.780949 5049 
patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:32:47 crc kubenswrapper[5049]: I0127 18:32:47.780994 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:32:48 crc kubenswrapper[5049]: I0127 18:32:48.453136 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:32:48 crc kubenswrapper[5049]: I0127 18:32:48.453228 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 18:32:52 crc kubenswrapper[5049]: I0127 18:32:52.417736 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 18:32:52 crc kubenswrapper[5049]: I0127 18:32:52.450261 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 18:32:53 crc kubenswrapper[5049]: I0127 18:32:53.166266 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 18:32:53 crc kubenswrapper[5049]: I0127 18:32:53.453276 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 18:32:53 crc kubenswrapper[5049]: I0127 18:32:53.453335 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 18:32:53 crc kubenswrapper[5049]: I0127 18:32:53.502829 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 18:32:53 crc kubenswrapper[5049]: I0127 18:32:53.502893 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 18:32:54 crc kubenswrapper[5049]: I0127 18:32:54.535986 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c03ce4e8-c3ee-491f-834a-1bceadb58aa9" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:32:54 crc kubenswrapper[5049]: I0127 18:32:54.535954 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c03ce4e8-c3ee-491f-834a-1bceadb58aa9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.75:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:32:54 crc kubenswrapper[5049]: I0127 18:32:54.617828 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="821290a0-bf1f-4ad2-b28c-b3065d704a40" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.76:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:32:54 crc kubenswrapper[5049]: I0127 18:32:54.617857 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="821290a0-bf1f-4ad2-b28c-b3065d704a40" containerName="nova-api-log" probeResult="failure" output="Get 
\"http://10.217.1.76:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 18:33:03 crc kubenswrapper[5049]: I0127 18:33:03.469300 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 18:33:03 crc kubenswrapper[5049]: I0127 18:33:03.475126 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 18:33:03 crc kubenswrapper[5049]: I0127 18:33:03.478504 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 18:33:03 crc kubenswrapper[5049]: I0127 18:33:03.506359 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 18:33:03 crc kubenswrapper[5049]: I0127 18:33:03.506721 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 18:33:03 crc kubenswrapper[5049]: I0127 18:33:03.507331 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 18:33:03 crc kubenswrapper[5049]: I0127 18:33:03.508799 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.253712 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.255305 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.257299 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.532311 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f59b64fd5-7sl9m"] Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.536697 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.552973 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j47j\" (UniqueName: \"kubernetes.io/projected/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-kube-api-access-5j47j\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.553095 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-config\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.553157 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-dns-svc\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.553210 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-ovsdbserver-sb\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.553285 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-ovsdbserver-nb\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.565594 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f59b64fd5-7sl9m"] Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.655539 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j47j\" (UniqueName: \"kubernetes.io/projected/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-kube-api-access-5j47j\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.655644 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-config\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.655734 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-dns-svc\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.655780 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-ovsdbserver-sb\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.655843 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-ovsdbserver-nb\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.657474 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-ovsdbserver-nb\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.657702 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-config\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.657895 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-dns-svc\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.658446 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-ovsdbserver-sb\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.683917 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j47j\" (UniqueName: \"kubernetes.io/projected/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-kube-api-access-5j47j\") pod \"dnsmasq-dns-f59b64fd5-7sl9m\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") " pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:04 crc kubenswrapper[5049]: I0127 18:33:04.862525 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:05 crc kubenswrapper[5049]: I0127 18:33:05.360281 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f59b64fd5-7sl9m"] Jan 27 18:33:05 crc kubenswrapper[5049]: W0127 18:33:05.370876 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e0a48bc_ddff_4c70_b9bf_18bcab398d7c.slice/crio-9886d95f8884a4786e2fb8543b01c85563204d4fc739a90b03238d8d5143b8b9 WatchSource:0}: Error finding container 9886d95f8884a4786e2fb8543b01c85563204d4fc739a90b03238d8d5143b8b9: Status 404 returned error can't find the container with id 9886d95f8884a4786e2fb8543b01c85563204d4fc739a90b03238d8d5143b8b9 Jan 27 18:33:06 crc kubenswrapper[5049]: I0127 18:33:06.281389 5049 generic.go:334] "Generic (PLEG): container finished" podID="5e0a48bc-ddff-4c70-b9bf-18bcab398d7c" containerID="e8ba2a71b27b3badf1911d9835c408ec40d6d544ae7c64b1c89f173898e2e47a" exitCode=0 Jan 27 18:33:06 crc kubenswrapper[5049]: I0127 18:33:06.283857 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" event={"ID":"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c","Type":"ContainerDied","Data":"e8ba2a71b27b3badf1911d9835c408ec40d6d544ae7c64b1c89f173898e2e47a"} Jan 27 18:33:06 crc kubenswrapper[5049]: I0127 18:33:06.283917 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" event={"ID":"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c","Type":"ContainerStarted","Data":"9886d95f8884a4786e2fb8543b01c85563204d4fc739a90b03238d8d5143b8b9"} Jan 27 18:33:07 crc kubenswrapper[5049]: I0127 18:33:07.292362 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" event={"ID":"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c","Type":"ContainerStarted","Data":"686c5a3b1e0fc9575a1e9bdcb4079b7eb48a51d614932ead364cc31b138a7600"} Jan 27 18:33:07 crc kubenswrapper[5049]: I0127 18:33:07.292917 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:07 crc kubenswrapper[5049]: I0127 18:33:07.313620 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" podStartSLOduration=3.31359877 podStartE2EDuration="3.31359877s" podCreationTimestamp="2026-01-27 18:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:07.310206184 +0000 UTC m=+5762.409179733" watchObservedRunningTime="2026-01-27 18:33:07.31359877 +0000 UTC m=+5762.412572319" Jan 27 18:33:14 crc kubenswrapper[5049]: I0127 18:33:14.864395 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" Jan 27 18:33:14 crc kubenswrapper[5049]: I0127 18:33:14.935933 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6db99c5957-nklqk"] Jan 27 18:33:14 crc kubenswrapper[5049]: I0127 18:33:14.936166 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6db99c5957-nklqk" podUID="e959befa-4eff-46e2-853c-b057db776837" containerName="dnsmasq-dns" containerID="cri-o://a1539a1897a85aa3eba9e579e189afcb279f4334361660b64de328ac2ba56494" gracePeriod=10 Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.127795 5049 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-6db99c5957-nklqk" podUID="e959befa-4eff-46e2-853c-b057db776837" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.67:5353: connect: connection refused" Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.363192 5049 generic.go:334] "Generic (PLEG): container finished" podID="e959befa-4eff-46e2-853c-b057db776837" containerID="a1539a1897a85aa3eba9e579e189afcb279f4334361660b64de328ac2ba56494" exitCode=0 Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.363229 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6db99c5957-nklqk" event={"ID":"e959befa-4eff-46e2-853c-b057db776837","Type":"ContainerDied","Data":"a1539a1897a85aa3eba9e579e189afcb279f4334361660b64de328ac2ba56494"} Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.600812 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.765239 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-dns-svc\") pod \"e959befa-4eff-46e2-853c-b057db776837\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.765385 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-ovsdbserver-sb\") pod \"e959befa-4eff-46e2-853c-b057db776837\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.765417 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-config\") pod \"e959befa-4eff-46e2-853c-b057db776837\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.765612 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9jhf\" (UniqueName: \"kubernetes.io/projected/e959befa-4eff-46e2-853c-b057db776837-kube-api-access-s9jhf\") pod \"e959befa-4eff-46e2-853c-b057db776837\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.765699 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-ovsdbserver-nb\") pod \"e959befa-4eff-46e2-853c-b057db776837\" (UID: \"e959befa-4eff-46e2-853c-b057db776837\") " Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.784996 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e959befa-4eff-46e2-853c-b057db776837-kube-api-access-s9jhf" (OuterVolumeSpecName: "kube-api-access-s9jhf") pod "e959befa-4eff-46e2-853c-b057db776837" (UID: "e959befa-4eff-46e2-853c-b057db776837"). InnerVolumeSpecName "kube-api-access-s9jhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.818537 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-config" (OuterVolumeSpecName: "config") pod "e959befa-4eff-46e2-853c-b057db776837" (UID: "e959befa-4eff-46e2-853c-b057db776837"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.818699 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e959befa-4eff-46e2-853c-b057db776837" (UID: "e959befa-4eff-46e2-853c-b057db776837"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.834105 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e959befa-4eff-46e2-853c-b057db776837" (UID: "e959befa-4eff-46e2-853c-b057db776837"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.847353 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e959befa-4eff-46e2-853c-b057db776837" (UID: "e959befa-4eff-46e2-853c-b057db776837"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.869120 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.869199 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.869211 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9jhf\" (UniqueName: \"kubernetes.io/projected/e959befa-4eff-46e2-853c-b057db776837-kube-api-access-s9jhf\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.869221 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:15 crc kubenswrapper[5049]: I0127 18:33:15.869231 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e959befa-4eff-46e2-853c-b057db776837-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:16 crc kubenswrapper[5049]: I0127 18:33:16.373999 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6db99c5957-nklqk" event={"ID":"e959befa-4eff-46e2-853c-b057db776837","Type":"ContainerDied","Data":"c24d19727d5e59447bd671a24c2c0876cdb0bca39eb94c9423ac2c359e218cde"} Jan 27 18:33:16 crc kubenswrapper[5049]: I0127 18:33:16.374052 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6db99c5957-nklqk" Jan 27 18:33:16 crc kubenswrapper[5049]: I0127 18:33:16.374061 5049 scope.go:117] "RemoveContainer" containerID="a1539a1897a85aa3eba9e579e189afcb279f4334361660b64de328ac2ba56494" Jan 27 18:33:16 crc kubenswrapper[5049]: I0127 18:33:16.392577 5049 scope.go:117] "RemoveContainer" containerID="501867b99c463bb3db6eb7494140fb34be7cb4badf7e5688ee718481e2df9fab" Jan 27 18:33:16 crc kubenswrapper[5049]: I0127 18:33:16.413888 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6db99c5957-nklqk"] Jan 27 18:33:16 crc kubenswrapper[5049]: I0127 18:33:16.423430 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6db99c5957-nklqk"] Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.420627 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-z4wp7"] Jan 27 18:33:17 crc kubenswrapper[5049]: E0127 18:33:17.421139 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e959befa-4eff-46e2-853c-b057db776837" containerName="dnsmasq-dns" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.421157 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e959befa-4eff-46e2-853c-b057db776837" containerName="dnsmasq-dns" Jan 27 18:33:17 crc kubenswrapper[5049]: E0127 18:33:17.421191 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e959befa-4eff-46e2-853c-b057db776837" containerName="init" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.421198 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e959befa-4eff-46e2-853c-b057db776837" containerName="init" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.421417 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e959befa-4eff-46e2-853c-b057db776837" containerName="dnsmasq-dns" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.422137 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-z4wp7" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.431499 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-z4wp7"] Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.505161 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f552469c-419b-4eeb-9c8b-9b47f74d74c1-operator-scripts\") pod \"cinder-db-create-z4wp7\" (UID: \"f552469c-419b-4eeb-9c8b-9b47f74d74c1\") " pod="openstack/cinder-db-create-z4wp7" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.505342 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjlqc\" (UniqueName: \"kubernetes.io/projected/f552469c-419b-4eeb-9c8b-9b47f74d74c1-kube-api-access-jjlqc\") pod \"cinder-db-create-z4wp7\" (UID: \"f552469c-419b-4eeb-9c8b-9b47f74d74c1\") " pod="openstack/cinder-db-create-z4wp7" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.515465 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c611-account-create-update-zhhj6"] Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.516858 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c611-account-create-update-zhhj6" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.518748 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.523899 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c611-account-create-update-zhhj6"] Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.607479 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f552469c-419b-4eeb-9c8b-9b47f74d74c1-operator-scripts\") pod \"cinder-db-create-z4wp7\" (UID: \"f552469c-419b-4eeb-9c8b-9b47f74d74c1\") " pod="openstack/cinder-db-create-z4wp7" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.607740 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjlqc\" (UniqueName: \"kubernetes.io/projected/f552469c-419b-4eeb-9c8b-9b47f74d74c1-kube-api-access-jjlqc\") pod \"cinder-db-create-z4wp7\" (UID: \"f552469c-419b-4eeb-9c8b-9b47f74d74c1\") " pod="openstack/cinder-db-create-z4wp7" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.608179 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4twh\" (UniqueName: \"kubernetes.io/projected/9bc0fff6-620d-4f4d-9378-b64d4e8c686c-kube-api-access-r4twh\") pod \"cinder-c611-account-create-update-zhhj6\" (UID: \"9bc0fff6-620d-4f4d-9378-b64d4e8c686c\") " pod="openstack/cinder-c611-account-create-update-zhhj6" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.609377 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc0fff6-620d-4f4d-9378-b64d4e8c686c-operator-scripts\") pod \"cinder-c611-account-create-update-zhhj6\" (UID: \"9bc0fff6-620d-4f4d-9378-b64d4e8c686c\") " pod="openstack/cinder-c611-account-create-update-zhhj6" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.608602 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f552469c-419b-4eeb-9c8b-9b47f74d74c1-operator-scripts\") pod \"cinder-db-create-z4wp7\" (UID: \"f552469c-419b-4eeb-9c8b-9b47f74d74c1\") " pod="openstack/cinder-db-create-z4wp7" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.636629 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjlqc\" (UniqueName: \"kubernetes.io/projected/f552469c-419b-4eeb-9c8b-9b47f74d74c1-kube-api-access-jjlqc\") pod \"cinder-db-create-z4wp7\" (UID: \"f552469c-419b-4eeb-9c8b-9b47f74d74c1\") " pod="openstack/cinder-db-create-z4wp7" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.656916 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e959befa-4eff-46e2-853c-b057db776837" path="/var/lib/kubelet/pods/e959befa-4eff-46e2-853c-b057db776837/volumes" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.711227 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc0fff6-620d-4f4d-9378-b64d4e8c686c-operator-scripts\") pod \"cinder-c611-account-create-update-zhhj6\" (UID: \"9bc0fff6-620d-4f4d-9378-b64d4e8c686c\") " pod="openstack/cinder-c611-account-create-update-zhhj6" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.711439 
5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4twh\" (UniqueName: \"kubernetes.io/projected/9bc0fff6-620d-4f4d-9378-b64d4e8c686c-kube-api-access-r4twh\") pod \"cinder-c611-account-create-update-zhhj6\" (UID: \"9bc0fff6-620d-4f4d-9378-b64d4e8c686c\") " pod="openstack/cinder-c611-account-create-update-zhhj6" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.712030 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc0fff6-620d-4f4d-9378-b64d4e8c686c-operator-scripts\") pod \"cinder-c611-account-create-update-zhhj6\" (UID: \"9bc0fff6-620d-4f4d-9378-b64d4e8c686c\") " pod="openstack/cinder-c611-account-create-update-zhhj6" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.734389 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4twh\" (UniqueName: \"kubernetes.io/projected/9bc0fff6-620d-4f4d-9378-b64d4e8c686c-kube-api-access-r4twh\") pod \"cinder-c611-account-create-update-zhhj6\" (UID: \"9bc0fff6-620d-4f4d-9378-b64d4e8c686c\") " pod="openstack/cinder-c611-account-create-update-zhhj6" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.759177 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-z4wp7" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.781922 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.781991 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:33:17 crc kubenswrapper[5049]: I0127 18:33:17.831235 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c611-account-create-update-zhhj6"
Jan 27 18:33:18 crc kubenswrapper[5049]: I0127 18:33:18.298046 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-z4wp7"]
Jan 27 18:33:18 crc kubenswrapper[5049]: I0127 18:33:18.410224 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-z4wp7" event={"ID":"f552469c-419b-4eeb-9c8b-9b47f74d74c1","Type":"ContainerStarted","Data":"3997a0155f76ead3b625c9abe8d9e0eee8978647ba1350f742d772d1bbd96ff8"}
Jan 27 18:33:18 crc kubenswrapper[5049]: I0127 18:33:18.536096 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c611-account-create-update-zhhj6"]
Jan 27 18:33:18 crc kubenswrapper[5049]: W0127 18:33:18.539296 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bc0fff6_620d_4f4d_9378_b64d4e8c686c.slice/crio-5c3cd2dd8e56669dae30ebd37a163236f29c0486fbe894ff76f60cf03d138297 WatchSource:0}: Error finding container 5c3cd2dd8e56669dae30ebd37a163236f29c0486fbe894ff76f60cf03d138297: Status 404 returned error can't find the container with id 5c3cd2dd8e56669dae30ebd37a163236f29c0486fbe894ff76f60cf03d138297
Jan 27 18:33:19 crc kubenswrapper[5049]: I0127 18:33:19.422303 5049 generic.go:334] "Generic (PLEG): container finished" podID="9bc0fff6-620d-4f4d-9378-b64d4e8c686c" containerID="99fb83e0736ed0cb24cb4609318a981b9920652c6eea732f3673ea349c87b4bc" exitCode=0
Jan 27 18:33:19 crc kubenswrapper[5049]: I0127 18:33:19.422434 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c611-account-create-update-zhhj6" event={"ID":"9bc0fff6-620d-4f4d-9378-b64d4e8c686c","Type":"ContainerDied","Data":"99fb83e0736ed0cb24cb4609318a981b9920652c6eea732f3673ea349c87b4bc"}
Jan 27 18:33:19 crc kubenswrapper[5049]: I0127 18:33:19.422731 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c611-account-create-update-zhhj6" event={"ID":"9bc0fff6-620d-4f4d-9378-b64d4e8c686c","Type":"ContainerStarted","Data":"5c3cd2dd8e56669dae30ebd37a163236f29c0486fbe894ff76f60cf03d138297"}
Jan 27 18:33:19 crc kubenswrapper[5049]: I0127 18:33:19.424409 5049 generic.go:334] "Generic (PLEG): container finished" podID="f552469c-419b-4eeb-9c8b-9b47f74d74c1" containerID="a5de39a689fab0e265f5a87b7755a03098b2caee501f1b62c595a9dc890d464f" exitCode=0
Jan 27 18:33:19 crc kubenswrapper[5049]: I0127 18:33:19.424466 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-z4wp7" event={"ID":"f552469c-419b-4eeb-9c8b-9b47f74d74c1","Type":"ContainerDied","Data":"a5de39a689fab0e265f5a87b7755a03098b2caee501f1b62c595a9dc890d464f"}
Jan 27 18:33:20 crc kubenswrapper[5049]: I0127 18:33:20.941132 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c611-account-create-update-zhhj6"
Jan 27 18:33:20 crc kubenswrapper[5049]: I0127 18:33:20.949344 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-z4wp7"
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.076482 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f552469c-419b-4eeb-9c8b-9b47f74d74c1-operator-scripts\") pod \"f552469c-419b-4eeb-9c8b-9b47f74d74c1\" (UID: \"f552469c-419b-4eeb-9c8b-9b47f74d74c1\") "
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.076589 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4twh\" (UniqueName: \"kubernetes.io/projected/9bc0fff6-620d-4f4d-9378-b64d4e8c686c-kube-api-access-r4twh\") pod \"9bc0fff6-620d-4f4d-9378-b64d4e8c686c\" (UID: \"9bc0fff6-620d-4f4d-9378-b64d4e8c686c\") "
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.076624 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjlqc\" (UniqueName: \"kubernetes.io/projected/f552469c-419b-4eeb-9c8b-9b47f74d74c1-kube-api-access-jjlqc\") pod \"f552469c-419b-4eeb-9c8b-9b47f74d74c1\" (UID: \"f552469c-419b-4eeb-9c8b-9b47f74d74c1\") "
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.076890 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc0fff6-620d-4f4d-9378-b64d4e8c686c-operator-scripts\") pod \"9bc0fff6-620d-4f4d-9378-b64d4e8c686c\" (UID: \"9bc0fff6-620d-4f4d-9378-b64d4e8c686c\") "
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.077328 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f552469c-419b-4eeb-9c8b-9b47f74d74c1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f552469c-419b-4eeb-9c8b-9b47f74d74c1" (UID: "f552469c-419b-4eeb-9c8b-9b47f74d74c1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.077800 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc0fff6-620d-4f4d-9378-b64d4e8c686c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9bc0fff6-620d-4f4d-9378-b64d4e8c686c" (UID: "9bc0fff6-620d-4f4d-9378-b64d4e8c686c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.087089 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc0fff6-620d-4f4d-9378-b64d4e8c686c-kube-api-access-r4twh" (OuterVolumeSpecName: "kube-api-access-r4twh") pod "9bc0fff6-620d-4f4d-9378-b64d4e8c686c" (UID: "9bc0fff6-620d-4f4d-9378-b64d4e8c686c"). InnerVolumeSpecName "kube-api-access-r4twh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.087957 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f552469c-419b-4eeb-9c8b-9b47f74d74c1-kube-api-access-jjlqc" (OuterVolumeSpecName: "kube-api-access-jjlqc") pod "f552469c-419b-4eeb-9c8b-9b47f74d74c1" (UID: "f552469c-419b-4eeb-9c8b-9b47f74d74c1"). InnerVolumeSpecName "kube-api-access-jjlqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.179077 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc0fff6-620d-4f4d-9378-b64d4e8c686c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.179130 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f552469c-419b-4eeb-9c8b-9b47f74d74c1-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.179140 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4twh\" (UniqueName: \"kubernetes.io/projected/9bc0fff6-620d-4f4d-9378-b64d4e8c686c-kube-api-access-r4twh\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.179151 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjlqc\" (UniqueName: \"kubernetes.io/projected/f552469c-419b-4eeb-9c8b-9b47f74d74c1-kube-api-access-jjlqc\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.444831 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-z4wp7" event={"ID":"f552469c-419b-4eeb-9c8b-9b47f74d74c1","Type":"ContainerDied","Data":"3997a0155f76ead3b625c9abe8d9e0eee8978647ba1350f742d772d1bbd96ff8"}
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.444880 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3997a0155f76ead3b625c9abe8d9e0eee8978647ba1350f742d772d1bbd96ff8"
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.444891 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-z4wp7"
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.446987 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c611-account-create-update-zhhj6" event={"ID":"9bc0fff6-620d-4f4d-9378-b64d4e8c686c","Type":"ContainerDied","Data":"5c3cd2dd8e56669dae30ebd37a163236f29c0486fbe894ff76f60cf03d138297"}
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.447026 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c3cd2dd8e56669dae30ebd37a163236f29c0486fbe894ff76f60cf03d138297"
Jan 27 18:33:21 crc kubenswrapper[5049]: I0127 18:33:21.447106 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c611-account-create-update-zhhj6"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.753657 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-ktn6t"]
Jan 27 18:33:22 crc kubenswrapper[5049]: E0127 18:33:22.754423 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc0fff6-620d-4f4d-9378-b64d4e8c686c" containerName="mariadb-account-create-update"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.754439 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc0fff6-620d-4f4d-9378-b64d4e8c686c" containerName="mariadb-account-create-update"
Jan 27 18:33:22 crc kubenswrapper[5049]: E0127 18:33:22.754491 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f552469c-419b-4eeb-9c8b-9b47f74d74c1" containerName="mariadb-database-create"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.754501 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f552469c-419b-4eeb-9c8b-9b47f74d74c1" containerName="mariadb-database-create"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.754737 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f552469c-419b-4eeb-9c8b-9b47f74d74c1" containerName="mariadb-database-create"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.754767 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc0fff6-620d-4f4d-9378-b64d4e8c686c" containerName="mariadb-account-create-update"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.755492 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.762252 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.762252 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7wmxh"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.770236 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ktn6t"]
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.772276 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.811685 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-config-data\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.811736 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shnjz\" (UniqueName: \"kubernetes.io/projected/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-kube-api-access-shnjz\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.811794 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-combined-ca-bundle\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.811984 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-db-sync-config-data\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.812311 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-etc-machine-id\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.812523 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-scripts\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.913623 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-scripts\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.913712 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-config-data\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.913783 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shnjz\" (UniqueName: \"kubernetes.io/projected/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-kube-api-access-shnjz\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.914150 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-combined-ca-bundle\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.914187 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-db-sync-config-data\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.914242 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-etc-machine-id\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.914359 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-etc-machine-id\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.922908 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-db-sync-config-data\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.922969 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-config-data\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.928287 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-scripts\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.928515 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-combined-ca-bundle\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:22 crc kubenswrapper[5049]: I0127 18:33:22.935968 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shnjz\" (UniqueName: \"kubernetes.io/projected/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-kube-api-access-shnjz\") pod \"cinder-db-sync-ktn6t\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") " pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:23 crc kubenswrapper[5049]: I0127 18:33:23.086244 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:23 crc kubenswrapper[5049]: W0127 18:33:23.570618 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbe4fdc4_d12a_4de8_b88c_298bfb0567b8.slice/crio-255a7a21b67812c0a5792f339c4a5a2dde6c01457758f236e10a9eed15529194 WatchSource:0}: Error finding container 255a7a21b67812c0a5792f339c4a5a2dde6c01457758f236e10a9eed15529194: Status 404 returned error can't find the container with id 255a7a21b67812c0a5792f339c4a5a2dde6c01457758f236e10a9eed15529194
Jan 27 18:33:23 crc kubenswrapper[5049]: I0127 18:33:23.573540 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-ktn6t"]
Jan 27 18:33:24 crc kubenswrapper[5049]: I0127 18:33:24.470960 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ktn6t" event={"ID":"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8","Type":"ContainerStarted","Data":"9c3a55c28dfb494fff1ee02c60f86f56efcef10ce7ef21d11826522cb54bafc4"}
Jan 27 18:33:24 crc kubenswrapper[5049]: I0127 18:33:24.471523 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ktn6t" event={"ID":"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8","Type":"ContainerStarted","Data":"255a7a21b67812c0a5792f339c4a5a2dde6c01457758f236e10a9eed15529194"}
Jan 27 18:33:24 crc kubenswrapper[5049]: I0127 18:33:24.491525 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-ktn6t" podStartSLOduration=2.491506062 podStartE2EDuration="2.491506062s" podCreationTimestamp="2026-01-27 18:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:24.485360188 +0000 UTC m=+5779.584333737" watchObservedRunningTime="2026-01-27 18:33:24.491506062 +0000 UTC m=+5779.590479611"
Jan 27 18:33:28 crc kubenswrapper[5049]: I0127 18:33:28.511979 5049 generic.go:334] "Generic (PLEG): container finished" podID="dbe4fdc4-d12a-4de8-b88c-298bfb0567b8" containerID="9c3a55c28dfb494fff1ee02c60f86f56efcef10ce7ef21d11826522cb54bafc4" exitCode=0
Jan 27 18:33:28 crc kubenswrapper[5049]: I0127 18:33:28.512082 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ktn6t" event={"ID":"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8","Type":"ContainerDied","Data":"9c3a55c28dfb494fff1ee02c60f86f56efcef10ce7ef21d11826522cb54bafc4"}
Jan 27 18:33:29 crc kubenswrapper[5049]: I0127 18:33:29.922602 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.037041 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-scripts\") pod \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") "
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.037272 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-etc-machine-id\") pod \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") "
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.037442 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "dbe4fdc4-d12a-4de8-b88c-298bfb0567b8" (UID: "dbe4fdc4-d12a-4de8-b88c-298bfb0567b8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.037483 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-db-sync-config-data\") pod \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") "
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.037695 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-combined-ca-bundle\") pod \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") "
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.038277 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-config-data\") pod \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") "
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.038327 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shnjz\" (UniqueName: \"kubernetes.io/projected/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-kube-api-access-shnjz\") pod \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\" (UID: \"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8\") "
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.039298 5049 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.044370 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "dbe4fdc4-d12a-4de8-b88c-298bfb0567b8" (UID: "dbe4fdc4-d12a-4de8-b88c-298bfb0567b8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.045009 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-scripts" (OuterVolumeSpecName: "scripts") pod "dbe4fdc4-d12a-4de8-b88c-298bfb0567b8" (UID: "dbe4fdc4-d12a-4de8-b88c-298bfb0567b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.049318 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-kube-api-access-shnjz" (OuterVolumeSpecName: "kube-api-access-shnjz") pod "dbe4fdc4-d12a-4de8-b88c-298bfb0567b8" (UID: "dbe4fdc4-d12a-4de8-b88c-298bfb0567b8"). InnerVolumeSpecName "kube-api-access-shnjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.070518 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbe4fdc4-d12a-4de8-b88c-298bfb0567b8" (UID: "dbe4fdc4-d12a-4de8-b88c-298bfb0567b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.090209 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-config-data" (OuterVolumeSpecName: "config-data") pod "dbe4fdc4-d12a-4de8-b88c-298bfb0567b8" (UID: "dbe4fdc4-d12a-4de8-b88c-298bfb0567b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.140803 5049 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.140858 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.140871 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.140883 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shnjz\" (UniqueName: \"kubernetes.io/projected/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-kube-api-access-shnjz\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.140898 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.537061 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-ktn6t" event={"ID":"dbe4fdc4-d12a-4de8-b88c-298bfb0567b8","Type":"ContainerDied","Data":"255a7a21b67812c0a5792f339c4a5a2dde6c01457758f236e10a9eed15529194"}
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.537108 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="255a7a21b67812c0a5792f339c4a5a2dde6c01457758f236e10a9eed15529194"
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.537164 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-ktn6t"
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.877563 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"]
Jan 27 18:33:30 crc kubenswrapper[5049]: E0127 18:33:30.877970 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe4fdc4-d12a-4de8-b88c-298bfb0567b8" containerName="cinder-db-sync"
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.877983 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe4fdc4-d12a-4de8-b88c-298bfb0567b8" containerName="cinder-db-sync"
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.878172 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe4fdc4-d12a-4de8-b88c-298bfb0567b8" containerName="cinder-db-sync"
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.884462 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.902584 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"]
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.959604 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6w7f\" (UniqueName: \"kubernetes.io/projected/ada00ff1-234e-426a-8867-2a885fd955e1-kube-api-access-r6w7f\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.959687 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ada00ff1-234e-426a-8867-2a885fd955e1-dns-svc\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.959736 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ada00ff1-234e-426a-8867-2a885fd955e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.959768 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ada00ff1-234e-426a-8867-2a885fd955e1-config\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:30 crc kubenswrapper[5049]: I0127 18:33:30.959844 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ada00ff1-234e-426a-8867-2a885fd955e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.063883 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6w7f\" (UniqueName: \"kubernetes.io/projected/ada00ff1-234e-426a-8867-2a885fd955e1-kube-api-access-r6w7f\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.063940 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ada00ff1-234e-426a-8867-2a885fd955e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.063968 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ada00ff1-234e-426a-8867-2a885fd955e1-dns-svc\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.063990 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ada00ff1-234e-426a-8867-2a885fd955e1-config\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.064069 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ada00ff1-234e-426a-8867-2a885fd955e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.065008 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ada00ff1-234e-426a-8867-2a885fd955e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.065022 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ada00ff1-234e-426a-8867-2a885fd955e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.065553 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ada00ff1-234e-426a-8867-2a885fd955e1-dns-svc\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.067444 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ada00ff1-234e-426a-8867-2a885fd955e1-config\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.097968 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6w7f\" (UniqueName: \"kubernetes.io/projected/ada00ff1-234e-426a-8867-2a885fd955e1-kube-api-access-r6w7f\") pod \"dnsmasq-dns-5cbd8fbdcc-jwtjd\" (UID: \"ada00ff1-234e-426a-8867-2a885fd955e1\") " pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.148046 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.149945 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.151991 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.152495 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.152704 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.152718 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7wmxh"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.159408 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.165768 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxxhw\" (UniqueName: \"kubernetes.io/projected/6f28a351-8073-4e1b-ba9b-c7824994fa1b-kube-api-access-mxxhw\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.165859 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-scripts\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.166077 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f28a351-8073-4e1b-ba9b-c7824994fa1b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.166118 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-config-data\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.166153 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f28a351-8073-4e1b-ba9b-c7824994fa1b-logs\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.166234 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-config-data-custom\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.166343 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.222827 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.267903 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.267975 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxxhw\" (UniqueName: \"kubernetes.io/projected/6f28a351-8073-4e1b-ba9b-c7824994fa1b-kube-api-access-mxxhw\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.268033 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-scripts\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.268075 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f28a351-8073-4e1b-ba9b-c7824994fa1b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.268110 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-config-data\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.268140 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f28a351-8073-4e1b-ba9b-c7824994fa1b-logs\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.268199 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-config-data-custom\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.273182 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f28a351-8073-4e1b-ba9b-c7824994fa1b-logs\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.273293 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f28a351-8073-4e1b-ba9b-c7824994fa1b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.274379 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.274417 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-scripts\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.275023 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-config-data-custom\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.279944 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-config-data\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.291748 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxxhw\" (UniqueName: \"kubernetes.io/projected/6f28a351-8073-4e1b-ba9b-c7824994fa1b-kube-api-access-mxxhw\") pod \"cinder-api-0\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.487650 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 27 18:33:31 crc kubenswrapper[5049]: I0127 18:33:31.783743 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"]
Jan 27 18:33:32 crc kubenswrapper[5049]: W0127 18:33:32.002682 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f28a351_8073_4e1b_ba9b_c7824994fa1b.slice/crio-4293bc4a6d9b24e2097d94459e18043a039c57b775d224abab5962682cc4a149 WatchSource:0}: Error finding container 4293bc4a6d9b24e2097d94459e18043a039c57b775d224abab5962682cc4a149: Status 404 returned error can't find the container with id 4293bc4a6d9b24e2097d94459e18043a039c57b775d224abab5962682cc4a149
Jan 27 18:33:32 crc kubenswrapper[5049]: I0127 18:33:32.006948 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 18:33:32 crc kubenswrapper[5049]: I0127 18:33:32.563599 5049 generic.go:334] "Generic (PLEG): container finished" podID="ada00ff1-234e-426a-8867-2a885fd955e1" containerID="670c459bca9ccc99d73ffc2b45cc564dec6c96bd2d7e39749fd0bad9457625ac" exitCode=0
Jan 27 18:33:32 crc kubenswrapper[5049]: I0127 18:33:32.563800 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd" event={"ID":"ada00ff1-234e-426a-8867-2a885fd955e1","Type":"ContainerDied","Data":"670c459bca9ccc99d73ffc2b45cc564dec6c96bd2d7e39749fd0bad9457625ac"}
Jan 27 18:33:32 crc kubenswrapper[5049]: I0127 18:33:32.563831 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd" event={"ID":"ada00ff1-234e-426a-8867-2a885fd955e1","Type":"ContainerStarted","Data":"258e3755500bc390430cecb14748d73bc2d895d8e65019fe044ad4a2d4640dbb"}
Jan 27 18:33:32 crc kubenswrapper[5049]: I0127 18:33:32.565527 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f28a351-8073-4e1b-ba9b-c7824994fa1b","Type":"ContainerStarted","Data":"4293bc4a6d9b24e2097d94459e18043a039c57b775d224abab5962682cc4a149"}
Jan 27 18:33:33 crc kubenswrapper[5049]: I0127 18:33:33.577054 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f28a351-8073-4e1b-ba9b-c7824994fa1b","Type":"ContainerStarted","Data":"272b264bb350d28af1216739dc61deba8a22219a7d6e5ffcc2d7af9504278311"}
Jan 27 18:33:33 crc kubenswrapper[5049]: I0127 18:33:33.577500 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f28a351-8073-4e1b-ba9b-c7824994fa1b","Type":"ContainerStarted","Data":"09f65afd974212066fae8a522643bbf54a3f24f444361580103a89e8e95bb5ce"}
Jan 27 18:33:33 crc kubenswrapper[5049]: I0127 18:33:33.577517 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 27 18:33:33 crc kubenswrapper[5049]: I0127 18:33:33.580193 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd" event={"ID":"ada00ff1-234e-426a-8867-2a885fd955e1","Type":"ContainerStarted","Data":"5a1a953bafe38dca1d62c7d2a2fad61537d964fedd1264027b8db9580419e3c9"}
Jan 27 18:33:33 crc kubenswrapper[5049]: I0127 18:33:33.580545 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:33 crc kubenswrapper[5049]: I0127 18:33:33.621322 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd" podStartSLOduration=3.621303367 podStartE2EDuration="3.621303367s" podCreationTimestamp="2026-01-27 18:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:33.615359399 +0000 UTC m=+5788.714332958" watchObservedRunningTime="2026-01-27 18:33:33.621303367 +0000 UTC m=+5788.720276916"
Jan 27 18:33:33 crc kubenswrapper[5049]: I0127 18:33:33.626089 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.626071021 podStartE2EDuration="2.626071021s" podCreationTimestamp="2026-01-27 18:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:33.599139121 +0000 UTC m=+5788.698112670" watchObservedRunningTime="2026-01-27 18:33:33.626071021 +0000 UTC m=+5788.725044570"
Jan 27 18:33:38 crc kubenswrapper[5049]: I0127 18:33:38.850343 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g44vx"]
Jan 27 18:33:38 crc kubenswrapper[5049]: I0127 18:33:38.852581 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:38 crc kubenswrapper[5049]: I0127 18:33:38.861453 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g44vx"]
Jan 27 18:33:38 crc kubenswrapper[5049]: I0127 18:33:38.917144 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd20540d-c693-4e71-aa96-839bd95201d5-catalog-content\") pod \"certified-operators-g44vx\" (UID: \"fd20540d-c693-4e71-aa96-839bd95201d5\") " pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:38 crc kubenswrapper[5049]: I0127 18:33:38.917472 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd20540d-c693-4e71-aa96-839bd95201d5-utilities\") pod \"certified-operators-g44vx\" (UID: \"fd20540d-c693-4e71-aa96-839bd95201d5\") " pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:38 crc kubenswrapper[5049]: I0127 18:33:38.917749 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h862f\" (UniqueName: \"kubernetes.io/projected/fd20540d-c693-4e71-aa96-839bd95201d5-kube-api-access-h862f\") pod \"certified-operators-g44vx\" (UID: \"fd20540d-c693-4e71-aa96-839bd95201d5\") " pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:39 crc kubenswrapper[5049]: I0127 18:33:39.019508 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h862f\" (UniqueName: \"kubernetes.io/projected/fd20540d-c693-4e71-aa96-839bd95201d5-kube-api-access-h862f\") pod \"certified-operators-g44vx\" (UID: \"fd20540d-c693-4e71-aa96-839bd95201d5\") " pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:39 crc kubenswrapper[5049]: I0127 18:33:39.019619 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd20540d-c693-4e71-aa96-839bd95201d5-catalog-content\") pod \"certified-operators-g44vx\" (UID: \"fd20540d-c693-4e71-aa96-839bd95201d5\") " pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:39 crc kubenswrapper[5049]: I0127 18:33:39.019648 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd20540d-c693-4e71-aa96-839bd95201d5-utilities\") pod \"certified-operators-g44vx\" (UID: \"fd20540d-c693-4e71-aa96-839bd95201d5\") " pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:39 crc kubenswrapper[5049]: I0127 18:33:39.020315 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd20540d-c693-4e71-aa96-839bd95201d5-utilities\") pod \"certified-operators-g44vx\" (UID: \"fd20540d-c693-4e71-aa96-839bd95201d5\") " pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:39 crc kubenswrapper[5049]: I0127 18:33:39.020486 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd20540d-c693-4e71-aa96-839bd95201d5-catalog-content\") pod \"certified-operators-g44vx\" (UID: \"fd20540d-c693-4e71-aa96-839bd95201d5\") " pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:39 crc kubenswrapper[5049]: I0127 18:33:39.039735 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h862f\" (UniqueName: \"kubernetes.io/projected/fd20540d-c693-4e71-aa96-839bd95201d5-kube-api-access-h862f\") pod \"certified-operators-g44vx\" (UID: \"fd20540d-c693-4e71-aa96-839bd95201d5\") " pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:39 crc kubenswrapper[5049]: I0127 18:33:39.178581 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:39 crc kubenswrapper[5049]: I0127 18:33:39.786716 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g44vx"]
Jan 27 18:33:39 crc kubenswrapper[5049]: W0127 18:33:39.797790 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd20540d_c693_4e71_aa96_839bd95201d5.slice/crio-3be98ef3327fc01fc65b35721fbf3d7814284de287848dbbe44cebcf6987e51e WatchSource:0}: Error finding container 3be98ef3327fc01fc65b35721fbf3d7814284de287848dbbe44cebcf6987e51e: Status 404 returned error can't find the container with id 3be98ef3327fc01fc65b35721fbf3d7814284de287848dbbe44cebcf6987e51e
Jan 27 18:33:40 crc kubenswrapper[5049]: I0127 18:33:40.653290 5049 generic.go:334] "Generic (PLEG): container finished" podID="fd20540d-c693-4e71-aa96-839bd95201d5" containerID="4766049ae5ca5b17b3465a59e5b78186448ba5ae7760c92e5466dad6592604de" exitCode=0
Jan 27 18:33:40 crc kubenswrapper[5049]: I0127 18:33:40.653329 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g44vx" event={"ID":"fd20540d-c693-4e71-aa96-839bd95201d5","Type":"ContainerDied","Data":"4766049ae5ca5b17b3465a59e5b78186448ba5ae7760c92e5466dad6592604de"}
Jan 27 18:33:40 crc kubenswrapper[5049]: I0127 18:33:40.653686 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g44vx" event={"ID":"fd20540d-c693-4e71-aa96-839bd95201d5","Type":"ContainerStarted","Data":"3be98ef3327fc01fc65b35721fbf3d7814284de287848dbbe44cebcf6987e51e"}
Jan 27 18:33:40 crc kubenswrapper[5049]: I0127 18:33:40.655193 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 18:33:41 crc kubenswrapper[5049]: I0127 18:33:41.224723 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cbd8fbdcc-jwtjd"
Jan 27 18:33:41 crc kubenswrapper[5049]: I0127 18:33:41.283329 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f59b64fd5-7sl9m"]
Jan 27 18:33:41 crc kubenswrapper[5049]: I0127 18:33:41.283555 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" podUID="5e0a48bc-ddff-4c70-b9bf-18bcab398d7c" containerName="dnsmasq-dns" containerID="cri-o://686c5a3b1e0fc9575a1e9bdcb4079b7eb48a51d614932ead364cc31b138a7600" gracePeriod=10
Jan 27 18:33:41 crc kubenswrapper[5049]: I0127 18:33:41.745470 5049 generic.go:334] "Generic (PLEG): container finished" podID="5e0a48bc-ddff-4c70-b9bf-18bcab398d7c" containerID="686c5a3b1e0fc9575a1e9bdcb4079b7eb48a51d614932ead364cc31b138a7600" exitCode=0
Jan 27 18:33:41 crc kubenswrapper[5049]: I0127 18:33:41.745903 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" event={"ID":"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c","Type":"ContainerDied","Data":"686c5a3b1e0fc9575a1e9bdcb4079b7eb48a51d614932ead364cc31b138a7600"}
Jan 27 18:33:41 crc kubenswrapper[5049]: I0127 18:33:41.833613 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m"
Jan 27 18:33:41 crc kubenswrapper[5049]: I0127 18:33:41.873054 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j47j\" (UniqueName: \"kubernetes.io/projected/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-kube-api-access-5j47j\") pod \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") "
Jan 27 18:33:41 crc kubenswrapper[5049]: I0127 18:33:41.873123 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-dns-svc\") pod \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") "
Jan 27 18:33:41 crc kubenswrapper[5049]: I0127 18:33:41.873226 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-ovsdbserver-nb\") pod \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") "
Jan 27 18:33:41 crc kubenswrapper[5049]: I0127 18:33:41.873381 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-config\") pod \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") "
Jan 27 18:33:41 crc kubenswrapper[5049]: I0127 18:33:41.873416 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-ovsdbserver-sb\") pod \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\" (UID: \"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c\") "
Jan 27 18:33:41 crc kubenswrapper[5049]: I0127 18:33:41.896809 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-kube-api-access-5j47j" (OuterVolumeSpecName: "kube-api-access-5j47j") pod "5e0a48bc-ddff-4c70-b9bf-18bcab398d7c" (UID: "5e0a48bc-ddff-4c70-b9bf-18bcab398d7c"). InnerVolumeSpecName "kube-api-access-5j47j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:33:41 crc kubenswrapper[5049]: I0127 18:33:41.975887 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j47j\" (UniqueName: \"kubernetes.io/projected/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-kube-api-access-5j47j\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:41 crc kubenswrapper[5049]: I0127 18:33:41.978316 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5e0a48bc-ddff-4c70-b9bf-18bcab398d7c" (UID: "5e0a48bc-ddff-4c70-b9bf-18bcab398d7c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.006886 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5e0a48bc-ddff-4c70-b9bf-18bcab398d7c" (UID: "5e0a48bc-ddff-4c70-b9bf-18bcab398d7c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.010394 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-config" (OuterVolumeSpecName: "config") pod "5e0a48bc-ddff-4c70-b9bf-18bcab398d7c" (UID: "5e0a48bc-ddff-4c70-b9bf-18bcab398d7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.014171 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5e0a48bc-ddff-4c70-b9bf-18bcab398d7c" (UID: "5e0a48bc-ddff-4c70-b9bf-18bcab398d7c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.077824 5049 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-config\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.078072 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.078134 5049 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.078214 5049 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.756067 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m" event={"ID":"5e0a48bc-ddff-4c70-b9bf-18bcab398d7c","Type":"ContainerDied","Data":"9886d95f8884a4786e2fb8543b01c85563204d4fc739a90b03238d8d5143b8b9"}
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.756119 5049 scope.go:117] "RemoveContainer" containerID="686c5a3b1e0fc9575a1e9bdcb4079b7eb48a51d614932ead364cc31b138a7600"
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.757346 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f59b64fd5-7sl9m"
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.759672 5049 generic.go:334] "Generic (PLEG): container finished" podID="fd20540d-c693-4e71-aa96-839bd95201d5" containerID="5aa093e47c82341bfccb40a4eb7b18e6d2f40f977db993104fdd3cea733dfa1d" exitCode=0
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.759710 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g44vx" event={"ID":"fd20540d-c693-4e71-aa96-839bd95201d5","Type":"ContainerDied","Data":"5aa093e47c82341bfccb40a4eb7b18e6d2f40f977db993104fdd3cea733dfa1d"}
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.781687 5049 scope.go:117] "RemoveContainer" containerID="e8ba2a71b27b3badf1911d9835c408ec40d6d544ae7c64b1c89f173898e2e47a"
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.835520 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f59b64fd5-7sl9m"]
Jan 27 18:33:42 crc kubenswrapper[5049]: I0127 18:33:42.858532 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f59b64fd5-7sl9m"]
Jan 27 18:33:43 crc kubenswrapper[5049]: I0127 18:33:43.658371 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0a48bc-ddff-4c70-b9bf-18bcab398d7c" path="/var/lib/kubelet/pods/5e0a48bc-ddff-4c70-b9bf-18bcab398d7c/volumes"
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.010318 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.011010 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c03ce4e8-c3ee-491f-834a-1bceadb58aa9" containerName="nova-metadata-log" containerID="cri-o://621d6abb816c47c454d53ba3881659523c3a47c80d80e720f068017b7daeef49" gracePeriod=30
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.011211 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c03ce4e8-c3ee-491f-834a-1bceadb58aa9" containerName="nova-metadata-metadata" containerID="cri-o://1849e002e502ad61c6b853bd3a4fdc70c1bba2e5ce74848e75fefbc200ebd098" gracePeriod=30
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.031694 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.032504 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f1016fc2-6824-4d3e-a31b-d4dd3617cc4d" containerName="nova-scheduler-scheduler" containerID="cri-o://fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47" gracePeriod=30
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.090732 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.090928 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="e3a94daa-7472-434d-9137-fe254ac3027e" containerName="nova-cell1-conductor-conductor" containerID="cri-o://cc4c6e38906afcfcbc86761782e6b32a64c85cfc810063d7755b1272fdf11ae4" gracePeriod=30
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.103324 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.103575 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="75ca3e57-fb44-41a7-8ce4-a88f81a418a7" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d9bde9784d3f5bc32d7f2e5df6f17cac55342325149ffc02f5805bd6f6a7c95e" gracePeriod=30
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.118918 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.119177 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="821290a0-bf1f-4ad2-b28c-b3065d704a40" containerName="nova-api-log" containerID="cri-o://c6d6d842f96dba07f7ccce130d35b8a83146b1f258fd9547e5e3a427a0181164" gracePeriod=30
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.119232 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="821290a0-bf1f-4ad2-b28c-b3065d704a40" containerName="nova-api-api" containerID="cri-o://39cb2cd5dcc52237a8bf7e4e1e70534d480040bef6b39acbe2d26693236a1c7a" gracePeriod=30
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.778804 5049 generic.go:334] "Generic (PLEG): container finished" podID="75ca3e57-fb44-41a7-8ce4-a88f81a418a7" containerID="d9bde9784d3f5bc32d7f2e5df6f17cac55342325149ffc02f5805bd6f6a7c95e" exitCode=0
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.778861 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"75ca3e57-fb44-41a7-8ce4-a88f81a418a7","Type":"ContainerDied","Data":"d9bde9784d3f5bc32d7f2e5df6f17cac55342325149ffc02f5805bd6f6a7c95e"}
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.780931 5049 generic.go:334] "Generic (PLEG): container finished" podID="c03ce4e8-c3ee-491f-834a-1bceadb58aa9" containerID="621d6abb816c47c454d53ba3881659523c3a47c80d80e720f068017b7daeef49" exitCode=143
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.780989 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c03ce4e8-c3ee-491f-834a-1bceadb58aa9","Type":"ContainerDied","Data":"621d6abb816c47c454d53ba3881659523c3a47c80d80e720f068017b7daeef49"}
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.783502 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g44vx" event={"ID":"fd20540d-c693-4e71-aa96-839bd95201d5","Type":"ContainerStarted","Data":"9a59153c3ce52333c9edd66e09b42c2e0f9aa3d9080fdbe88f50d34317ebdaaa"}
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.785898 5049 generic.go:334] "Generic (PLEG): container finished" podID="821290a0-bf1f-4ad2-b28c-b3065d704a40" containerID="c6d6d842f96dba07f7ccce130d35b8a83146b1f258fd9547e5e3a427a0181164" exitCode=143
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.785931 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"821290a0-bf1f-4ad2-b28c-b3065d704a40","Type":"ContainerDied","Data":"c6d6d842f96dba07f7ccce130d35b8a83146b1f258fd9547e5e3a427a0181164"}
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.805911 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 27 18:33:44 crc kubenswrapper[5049]: I0127 18:33:44.808576 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g44vx" podStartSLOduration=4.011393208 podStartE2EDuration="6.808556231s" podCreationTimestamp="2026-01-27 18:33:38 +0000 UTC" firstStartedPulling="2026-01-27 18:33:40.654909699 +0000 UTC m=+5795.753883248" lastFinishedPulling="2026-01-27 18:33:43.452072722 +0000 UTC m=+5798.551046271" observedRunningTime="2026-01-27 18:33:44.802039437 +0000 UTC m=+5799.901012976" watchObservedRunningTime="2026-01-27 18:33:44.808556231 +0000 UTC m=+5799.907529780"
Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.078999 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="75ca3e57-fb44-41a7-8ce4-a88f81a418a7" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"http://10.217.1.66:6080/vnc_lite.html\": dial tcp 10.217.1.66:6080: connect: connection refused"
Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.558526 5049 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.637950 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-config-data\") pod \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\" (UID: \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\") " Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.638045 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55ndg\" (UniqueName: \"kubernetes.io/projected/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-kube-api-access-55ndg\") pod \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\" (UID: \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\") " Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.638082 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-combined-ca-bundle\") pod \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\" (UID: \"75ca3e57-fb44-41a7-8ce4-a88f81a418a7\") " Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.649215 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-kube-api-access-55ndg" (OuterVolumeSpecName: "kube-api-access-55ndg") pod "75ca3e57-fb44-41a7-8ce4-a88f81a418a7" (UID: "75ca3e57-fb44-41a7-8ce4-a88f81a418a7"). InnerVolumeSpecName "kube-api-access-55ndg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.681249 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75ca3e57-fb44-41a7-8ce4-a88f81a418a7" (UID: "75ca3e57-fb44-41a7-8ce4-a88f81a418a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.694942 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-config-data" (OuterVolumeSpecName: "config-data") pod "75ca3e57-fb44-41a7-8ce4-a88f81a418a7" (UID: "75ca3e57-fb44-41a7-8ce4-a88f81a418a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.740510 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.740543 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55ndg\" (UniqueName: \"kubernetes.io/projected/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-kube-api-access-55ndg\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.740559 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75ca3e57-fb44-41a7-8ce4-a88f81a418a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.799585 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.800319 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"75ca3e57-fb44-41a7-8ce4-a88f81a418a7","Type":"ContainerDied","Data":"1fabe3b823cd8724613649c26f0800ff1892610f506e10e72b0a00c94fec7180"} Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.800379 5049 scope.go:117] "RemoveContainer" containerID="d9bde9784d3f5bc32d7f2e5df6f17cac55342325149ffc02f5805bd6f6a7c95e" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.832756 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.853858 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.865635 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:33:45 crc kubenswrapper[5049]: E0127 18:33:45.866128 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0a48bc-ddff-4c70-b9bf-18bcab398d7c" containerName="dnsmasq-dns" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.866148 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0a48bc-ddff-4c70-b9bf-18bcab398d7c" containerName="dnsmasq-dns" Jan 27 18:33:45 crc kubenswrapper[5049]: E0127 18:33:45.866165 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0a48bc-ddff-4c70-b9bf-18bcab398d7c" containerName="init" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.866171 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0a48bc-ddff-4c70-b9bf-18bcab398d7c" containerName="init" Jan 27 18:33:45 crc kubenswrapper[5049]: E0127 18:33:45.866186 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ca3e57-fb44-41a7-8ce4-a88f81a418a7" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.866191 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ca3e57-fb44-41a7-8ce4-a88f81a418a7" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.866354 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0a48bc-ddff-4c70-b9bf-18bcab398d7c" containerName="dnsmasq-dns" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.866371 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ca3e57-fb44-41a7-8ce4-a88f81a418a7" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.866996 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.876177 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 18:33:45 crc kubenswrapper[5049]: I0127 18:33:45.881427 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:33:46 crc kubenswrapper[5049]: I0127 18:33:46.047189 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnzk5\" (UniqueName: \"kubernetes.io/projected/0a36975d-06df-4a40-9e9a-e14b781ee58f-kube-api-access-mnzk5\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a36975d-06df-4a40-9e9a-e14b781ee58f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:33:46 crc kubenswrapper[5049]: I0127 18:33:46.047845 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a36975d-06df-4a40-9e9a-e14b781ee58f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a36975d-06df-4a40-9e9a-e14b781ee58f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:33:46 crc kubenswrapper[5049]: I0127 18:33:46.047947 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a36975d-06df-4a40-9e9a-e14b781ee58f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a36975d-06df-4a40-9e9a-e14b781ee58f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:33:46 crc kubenswrapper[5049]: I0127 18:33:46.150934 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a36975d-06df-4a40-9e9a-e14b781ee58f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a36975d-06df-4a40-9e9a-e14b781ee58f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:33:46 crc kubenswrapper[5049]: I0127 18:33:46.151301 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a36975d-06df-4a40-9e9a-e14b781ee58f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a36975d-06df-4a40-9e9a-e14b781ee58f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:33:46 crc kubenswrapper[5049]: I0127 18:33:46.151487 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnzk5\" (UniqueName: \"kubernetes.io/projected/0a36975d-06df-4a40-9e9a-e14b781ee58f-kube-api-access-mnzk5\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a36975d-06df-4a40-9e9a-e14b781ee58f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:33:46 crc kubenswrapper[5049]: I0127 18:33:46.156329 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a36975d-06df-4a40-9e9a-e14b781ee58f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a36975d-06df-4a40-9e9a-e14b781ee58f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:33:46 crc kubenswrapper[5049]: I0127 18:33:46.163912 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a36975d-06df-4a40-9e9a-e14b781ee58f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a36975d-06df-4a40-9e9a-e14b781ee58f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:33:46 crc kubenswrapper[5049]: I0127 18:33:46.167443 
5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnzk5\" (UniqueName: \"kubernetes.io/projected/0a36975d-06df-4a40-9e9a-e14b781ee58f-kube-api-access-mnzk5\") pod \"nova-cell1-novncproxy-0\" (UID: \"0a36975d-06df-4a40-9e9a-e14b781ee58f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:33:46 crc kubenswrapper[5049]: I0127 18:33:46.225176 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 18:33:46 crc kubenswrapper[5049]: W0127 18:33:46.693793 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a36975d_06df_4a40_9e9a_e14b781ee58f.slice/crio-261d7d5d8676cddde09c0b37e1d1fa436f4de264f778c138dab868c40121663c WatchSource:0}: Error finding container 261d7d5d8676cddde09c0b37e1d1fa436f4de264f778c138dab868c40121663c: Status 404 returned error can't find the container with id 261d7d5d8676cddde09c0b37e1d1fa436f4de264f778c138dab868c40121663c Jan 27 18:33:46 crc kubenswrapper[5049]: I0127 18:33:46.698399 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 18:33:46 crc kubenswrapper[5049]: I0127 18:33:46.808626 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0a36975d-06df-4a40-9e9a-e14b781ee58f","Type":"ContainerStarted","Data":"261d7d5d8676cddde09c0b37e1d1fa436f4de264f778c138dab868c40121663c"} Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.332290 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.332852 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="04f936b5-5271-4bdb-89aa-bcbcc6e526ec" containerName="nova-cell0-conductor-conductor" containerID="cri-o://3b694ea58e80266f34b80393c7c0158d3e8e1d97ffb71542ce50980dcf5b12b0" gracePeriod=30 Jan 27 18:33:47 crc kubenswrapper[5049]: E0127 18:33:47.422075 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 18:33:47 crc kubenswrapper[5049]: E0127 18:33:47.428050 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 18:33:47 crc kubenswrapper[5049]: E0127 18:33:47.434820 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 18:33:47 crc kubenswrapper[5049]: E0127 18:33:47.434906 5049 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" 
podUID="f1016fc2-6824-4d3e-a31b-d4dd3617cc4d" containerName="nova-scheduler-scheduler" Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.658070 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75ca3e57-fb44-41a7-8ce4-a88f81a418a7" path="/var/lib/kubelet/pods/75ca3e57-fb44-41a7-8ce4-a88f81a418a7/volumes" Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.783273 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.783842 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.783881 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.784583 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f0c85dbb74448a363ed0be73b30a973046d8019ad413c4249fdaeac7a5b4439"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.784648 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://0f0c85dbb74448a363ed0be73b30a973046d8019ad413c4249fdaeac7a5b4439" gracePeriod=600 Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.929006 5049 generic.go:334] "Generic (PLEG): container finished" podID="821290a0-bf1f-4ad2-b28c-b3065d704a40" containerID="39cb2cd5dcc52237a8bf7e4e1e70534d480040bef6b39acbe2d26693236a1c7a" exitCode=0 Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.929118 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"821290a0-bf1f-4ad2-b28c-b3065d704a40","Type":"ContainerDied","Data":"39cb2cd5dcc52237a8bf7e4e1e70534d480040bef6b39acbe2d26693236a1c7a"} Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.932098 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.937979 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0a36975d-06df-4a40-9e9a-e14b781ee58f","Type":"ContainerStarted","Data":"b319082e3c98c840a6d6972dc140ee892517db233c12fbd4e8d044c5b2e7b4ec"} Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.941323 5049 generic.go:334] "Generic (PLEG): container finished" podID="c03ce4e8-c3ee-491f-834a-1bceadb58aa9" containerID="1849e002e502ad61c6b853bd3a4fdc70c1bba2e5ce74848e75fefbc200ebd098" exitCode=0 Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.941471 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c03ce4e8-c3ee-491f-834a-1bceadb58aa9","Type":"ContainerDied","Data":"1849e002e502ad61c6b853bd3a4fdc70c1bba2e5ce74848e75fefbc200ebd098"} Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.941551 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c03ce4e8-c3ee-491f-834a-1bceadb58aa9","Type":"ContainerDied","Data":"d1baf755322455d5d53d4fc785cec0a4be865b5b69778d2ccb607fd39c5b26e0"} Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.941627 5049 scope.go:117] "RemoveContainer" containerID="1849e002e502ad61c6b853bd3a4fdc70c1bba2e5ce74848e75fefbc200ebd098" Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.941808 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.960352 5049 generic.go:334] "Generic (PLEG): container finished" podID="e3a94daa-7472-434d-9137-fe254ac3027e" containerID="cc4c6e38906afcfcbc86761782e6b32a64c85cfc810063d7755b1272fdf11ae4" exitCode=0 Jan 27 18:33:47 crc kubenswrapper[5049]: I0127 18:33:47.960397 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e3a94daa-7472-434d-9137-fe254ac3027e","Type":"ContainerDied","Data":"cc4c6e38906afcfcbc86761782e6b32a64c85cfc810063d7755b1272fdf11ae4"} Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.023833 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.023813128 podStartE2EDuration="3.023813128s" podCreationTimestamp="2026-01-27 18:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:48.023236282 +0000 UTC m=+5803.122209841" watchObservedRunningTime="2026-01-27 18:33:48.023813128 +0000 UTC m=+5803.122786667" Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.080247 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cc4c6e38906afcfcbc86761782e6b32a64c85cfc810063d7755b1272fdf11ae4 is running failed: container process not found" containerID="cc4c6e38906afcfcbc86761782e6b32a64c85cfc810063d7755b1272fdf11ae4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.080514 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cc4c6e38906afcfcbc86761782e6b32a64c85cfc810063d7755b1272fdf11ae4 is running failed: container process not found" 
containerID="cc4c6e38906afcfcbc86761782e6b32a64c85cfc810063d7755b1272fdf11ae4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.080934 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cc4c6e38906afcfcbc86761782e6b32a64c85cfc810063d7755b1272fdf11ae4 is running failed: container process not found" containerID="cc4c6e38906afcfcbc86761782e6b32a64c85cfc810063d7755b1272fdf11ae4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.080960 5049 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cc4c6e38906afcfcbc86761782e6b32a64c85cfc810063d7755b1272fdf11ae4 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="e3a94daa-7472-434d-9137-fe254ac3027e" containerName="nova-cell1-conductor-conductor" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.123736 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6lk2\" (UniqueName: \"kubernetes.io/projected/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-kube-api-access-z6lk2\") pod \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.123968 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-combined-ca-bundle\") pod \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.124004 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-logs\") pod \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.124082 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-config-data\") pod \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\" (UID: \"c03ce4e8-c3ee-491f-834a-1bceadb58aa9\") " Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.136468 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-logs" (OuterVolumeSpecName: "logs") pod "c03ce4e8-c3ee-491f-834a-1bceadb58aa9" (UID: "c03ce4e8-c3ee-491f-834a-1bceadb58aa9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.136981 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-kube-api-access-z6lk2" (OuterVolumeSpecName: "kube-api-access-z6lk2") pod "c03ce4e8-c3ee-491f-834a-1bceadb58aa9" (UID: "c03ce4e8-c3ee-491f-834a-1bceadb58aa9"). InnerVolumeSpecName "kube-api-access-z6lk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.172994 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c03ce4e8-c3ee-491f-834a-1bceadb58aa9" (UID: "c03ce4e8-c3ee-491f-834a-1bceadb58aa9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.216472 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-config-data" (OuterVolumeSpecName: "config-data") pod "c03ce4e8-c3ee-491f-834a-1bceadb58aa9" (UID: "c03ce4e8-c3ee-491f-834a-1bceadb58aa9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.230572 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6lk2\" (UniqueName: \"kubernetes.io/projected/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-kube-api-access-z6lk2\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.230607 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.230617 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.230633 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c03ce4e8-c3ee-491f-834a-1bceadb58aa9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.239050 5049 scope.go:117] "RemoveContainer" containerID="621d6abb816c47c454d53ba3881659523c3a47c80d80e720f068017b7daeef49" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.253370 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.281880 5049 scope.go:117] "RemoveContainer" containerID="1849e002e502ad61c6b853bd3a4fdc70c1bba2e5ce74848e75fefbc200ebd098" Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.285968 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1849e002e502ad61c6b853bd3a4fdc70c1bba2e5ce74848e75fefbc200ebd098\": container with ID starting with 1849e002e502ad61c6b853bd3a4fdc70c1bba2e5ce74848e75fefbc200ebd098 not found: ID does not exist" containerID="1849e002e502ad61c6b853bd3a4fdc70c1bba2e5ce74848e75fefbc200ebd098" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.286016 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1849e002e502ad61c6b853bd3a4fdc70c1bba2e5ce74848e75fefbc200ebd098"} err="failed to get container status \"1849e002e502ad61c6b853bd3a4fdc70c1bba2e5ce74848e75fefbc200ebd098\": rpc error: code = NotFound desc = could not find container \"1849e002e502ad61c6b853bd3a4fdc70c1bba2e5ce74848e75fefbc200ebd098\": container with ID starting with 1849e002e502ad61c6b853bd3a4fdc70c1bba2e5ce74848e75fefbc200ebd098 not found: ID does not exist" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.286041 5049 scope.go:117] "RemoveContainer" containerID="621d6abb816c47c454d53ba3881659523c3a47c80d80e720f068017b7daeef49" Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.286579 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"621d6abb816c47c454d53ba3881659523c3a47c80d80e720f068017b7daeef49\": container with ID starting with 621d6abb816c47c454d53ba3881659523c3a47c80d80e720f068017b7daeef49 not found: ID does not exist" containerID="621d6abb816c47c454d53ba3881659523c3a47c80d80e720f068017b7daeef49" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.286606 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"621d6abb816c47c454d53ba3881659523c3a47c80d80e720f068017b7daeef49"} err="failed to get container status \"621d6abb816c47c454d53ba3881659523c3a47c80d80e720f068017b7daeef49\": rpc error: code = NotFound desc = could not find container \"621d6abb816c47c454d53ba3881659523c3a47c80d80e720f068017b7daeef49\": container with ID starting with 621d6abb816c47c454d53ba3881659523c3a47c80d80e720f068017b7daeef49 not found: ID does not exist" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.302713 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.320359 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.343763 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.352742 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.353191 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c03ce4e8-c3ee-491f-834a-1bceadb58aa9" containerName="nova-metadata-log" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.353210 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c03ce4e8-c3ee-491f-834a-1bceadb58aa9" containerName="nova-metadata-log" Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.353224 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a94daa-7472-434d-9137-fe254ac3027e" containerName="nova-cell1-conductor-conductor" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.353232 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a94daa-7472-434d-9137-fe254ac3027e" containerName="nova-cell1-conductor-conductor" Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.353256 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821290a0-bf1f-4ad2-b28c-b3065d704a40" containerName="nova-api-api" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.353264 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="821290a0-bf1f-4ad2-b28c-b3065d704a40" containerName="nova-api-api" Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.353278 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="821290a0-bf1f-4ad2-b28c-b3065d704a40" containerName="nova-api-log" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.353285 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="821290a0-bf1f-4ad2-b28c-b3065d704a40" containerName="nova-api-log" Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.353310 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c03ce4e8-c3ee-491f-834a-1bceadb58aa9" containerName="nova-metadata-metadata" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.353316 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c03ce4e8-c3ee-491f-834a-1bceadb58aa9" containerName="nova-metadata-metadata" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.353497 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a94daa-7472-434d-9137-fe254ac3027e" containerName="nova-cell1-conductor-conductor" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.353518 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="821290a0-bf1f-4ad2-b28c-b3065d704a40" containerName="nova-api-api" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.353533 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="821290a0-bf1f-4ad2-b28c-b3065d704a40" containerName="nova-api-log" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.353544 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c03ce4e8-c3ee-491f-834a-1bceadb58aa9" containerName="nova-metadata-metadata" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.353554 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c03ce4e8-c3ee-491f-834a-1bceadb58aa9" 
containerName="nova-metadata-log" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.354662 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.366331 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.372752 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.435609 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/821290a0-bf1f-4ad2-b28c-b3065d704a40-logs\") pod \"821290a0-bf1f-4ad2-b28c-b3065d704a40\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.436065 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a94daa-7472-434d-9137-fe254ac3027e-config-data\") pod \"e3a94daa-7472-434d-9137-fe254ac3027e\" (UID: \"e3a94daa-7472-434d-9137-fe254ac3027e\") " Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.436427 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/821290a0-bf1f-4ad2-b28c-b3065d704a40-logs" (OuterVolumeSpecName: "logs") pod "821290a0-bf1f-4ad2-b28c-b3065d704a40" (UID: "821290a0-bf1f-4ad2-b28c-b3065d704a40"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.437442 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821290a0-bf1f-4ad2-b28c-b3065d704a40-config-data\") pod \"821290a0-bf1f-4ad2-b28c-b3065d704a40\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.437662 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dnnn\" (UniqueName: \"kubernetes.io/projected/e3a94daa-7472-434d-9137-fe254ac3027e-kube-api-access-5dnnn\") pod \"e3a94daa-7472-434d-9137-fe254ac3027e\" (UID: \"e3a94daa-7472-434d-9137-fe254ac3027e\") " Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.437836 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcph7\" (UniqueName: \"kubernetes.io/projected/821290a0-bf1f-4ad2-b28c-b3065d704a40-kube-api-access-bcph7\") pod \"821290a0-bf1f-4ad2-b28c-b3065d704a40\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.438055 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a94daa-7472-434d-9137-fe254ac3027e-combined-ca-bundle\") pod \"e3a94daa-7472-434d-9137-fe254ac3027e\" (UID: \"e3a94daa-7472-434d-9137-fe254ac3027e\") " Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.438226 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821290a0-bf1f-4ad2-b28c-b3065d704a40-combined-ca-bundle\") pod \"821290a0-bf1f-4ad2-b28c-b3065d704a40\" (UID: \"821290a0-bf1f-4ad2-b28c-b3065d704a40\") " Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.438973 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/821290a0-bf1f-4ad2-b28c-b3065d704a40-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.451034 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821290a0-bf1f-4ad2-b28c-b3065d704a40-kube-api-access-bcph7" (OuterVolumeSpecName: "kube-api-access-bcph7") pod "821290a0-bf1f-4ad2-b28c-b3065d704a40" (UID: "821290a0-bf1f-4ad2-b28c-b3065d704a40"). InnerVolumeSpecName "kube-api-access-bcph7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.462439 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a94daa-7472-434d-9137-fe254ac3027e-kube-api-access-5dnnn" (OuterVolumeSpecName: "kube-api-access-5dnnn") pod "e3a94daa-7472-434d-9137-fe254ac3027e" (UID: "e3a94daa-7472-434d-9137-fe254ac3027e"). InnerVolumeSpecName "kube-api-access-5dnnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.477894 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821290a0-bf1f-4ad2-b28c-b3065d704a40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "821290a0-bf1f-4ad2-b28c-b3065d704a40" (UID: "821290a0-bf1f-4ad2-b28c-b3065d704a40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.482821 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a94daa-7472-434d-9137-fe254ac3027e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3a94daa-7472-434d-9137-fe254ac3027e" (UID: "e3a94daa-7472-434d-9137-fe254ac3027e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.503841 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821290a0-bf1f-4ad2-b28c-b3065d704a40-config-data" (OuterVolumeSpecName: "config-data") pod "821290a0-bf1f-4ad2-b28c-b3065d704a40" (UID: "821290a0-bf1f-4ad2-b28c-b3065d704a40"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.531899 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a94daa-7472-434d-9137-fe254ac3027e-config-data" (OuterVolumeSpecName: "config-data") pod "e3a94daa-7472-434d-9137-fe254ac3027e" (UID: "e3a94daa-7472-434d-9137-fe254ac3027e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.542823 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a32c6dea-a530-4aae-91fc-e4de8443aadf-logs\") pod \"nova-metadata-0\" (UID: \"a32c6dea-a530-4aae-91fc-e4de8443aadf\") " pod="openstack/nova-metadata-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.542871 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32c6dea-a530-4aae-91fc-e4de8443aadf-config-data\") pod \"nova-metadata-0\" (UID: \"a32c6dea-a530-4aae-91fc-e4de8443aadf\") " pod="openstack/nova-metadata-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.543132 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrcld\" (UniqueName: \"kubernetes.io/projected/a32c6dea-a530-4aae-91fc-e4de8443aadf-kube-api-access-zrcld\") pod \"nova-metadata-0\" (UID: \"a32c6dea-a530-4aae-91fc-e4de8443aadf\") " pod="openstack/nova-metadata-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.543390 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32c6dea-a530-4aae-91fc-e4de8443aadf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a32c6dea-a530-4aae-91fc-e4de8443aadf\") " pod="openstack/nova-metadata-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.543535 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/821290a0-bf1f-4ad2-b28c-b3065d704a40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.543559 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a94daa-7472-434d-9137-fe254ac3027e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.543572 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/821290a0-bf1f-4ad2-b28c-b3065d704a40-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.543587 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dnnn\" (UniqueName: \"kubernetes.io/projected/e3a94daa-7472-434d-9137-fe254ac3027e-kube-api-access-5dnnn\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.543599 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcph7\" (UniqueName: \"kubernetes.io/projected/821290a0-bf1f-4ad2-b28c-b3065d704a40-kube-api-access-bcph7\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.543611 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a94daa-7472-434d-9137-fe254ac3027e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.644902 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrcld\" (UniqueName: \"kubernetes.io/projected/a32c6dea-a530-4aae-91fc-e4de8443aadf-kube-api-access-zrcld\") pod \"nova-metadata-0\" (UID: \"a32c6dea-a530-4aae-91fc-e4de8443aadf\") " 
pod="openstack/nova-metadata-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.645010 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32c6dea-a530-4aae-91fc-e4de8443aadf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a32c6dea-a530-4aae-91fc-e4de8443aadf\") " pod="openstack/nova-metadata-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.645066 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a32c6dea-a530-4aae-91fc-e4de8443aadf-logs\") pod \"nova-metadata-0\" (UID: \"a32c6dea-a530-4aae-91fc-e4de8443aadf\") " pod="openstack/nova-metadata-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.645088 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32c6dea-a530-4aae-91fc-e4de8443aadf-config-data\") pod \"nova-metadata-0\" (UID: \"a32c6dea-a530-4aae-91fc-e4de8443aadf\") " pod="openstack/nova-metadata-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.647990 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a32c6dea-a530-4aae-91fc-e4de8443aadf-logs\") pod \"nova-metadata-0\" (UID: \"a32c6dea-a530-4aae-91fc-e4de8443aadf\") " pod="openstack/nova-metadata-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.650560 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a32c6dea-a530-4aae-91fc-e4de8443aadf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a32c6dea-a530-4aae-91fc-e4de8443aadf\") " pod="openstack/nova-metadata-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.663293 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a32c6dea-a530-4aae-91fc-e4de8443aadf-config-data\") pod \"nova-metadata-0\" (UID: \"a32c6dea-a530-4aae-91fc-e4de8443aadf\") " pod="openstack/nova-metadata-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.664795 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrcld\" (UniqueName: \"kubernetes.io/projected/a32c6dea-a530-4aae-91fc-e4de8443aadf-kube-api-access-zrcld\") pod \"nova-metadata-0\" (UID: \"a32c6dea-a530-4aae-91fc-e4de8443aadf\") " pod="openstack/nova-metadata-0" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.693905 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.778504 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b694ea58e80266f34b80393c7c0158d3e8e1d97ffb71542ce50980dcf5b12b0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.780189 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b694ea58e80266f34b80393c7c0158d3e8e1d97ffb71542ce50980dcf5b12b0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.787492 5049 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b694ea58e80266f34b80393c7c0158d3e8e1d97ffb71542ce50980dcf5b12b0" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 27 18:33:48 crc kubenswrapper[5049]: E0127 18:33:48.787921 5049 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="04f936b5-5271-4bdb-89aa-bcbcc6e526ec" containerName="nova-cell0-conductor-conductor" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.982142 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"821290a0-bf1f-4ad2-b28c-b3065d704a40","Type":"ContainerDied","Data":"ec7028a08fe31e6ab13503e5669831ad2424eb41606a74a940c63290629d5cdb"} Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.982204 5049 scope.go:117] "RemoveContainer" containerID="39cb2cd5dcc52237a8bf7e4e1e70534d480040bef6b39acbe2d26693236a1c7a" Jan 27 18:33:48 crc kubenswrapper[5049]: I0127 18:33:48.982726 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.004059 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e3a94daa-7472-434d-9137-fe254ac3027e","Type":"ContainerDied","Data":"04dffe057c074c6e1cfaa964a5987d1ab214c2f4909081fbafb4cc0d5db2c4c9"} Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.004153 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.028021 5049 scope.go:117] "RemoveContainer" containerID="c6d6d842f96dba07f7ccce130d35b8a83146b1f258fd9547e5e3a427a0181164"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.046814 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="0f0c85dbb74448a363ed0be73b30a973046d8019ad413c4249fdaeac7a5b4439" exitCode=0
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.047793 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"0f0c85dbb74448a363ed0be73b30a973046d8019ad413c4249fdaeac7a5b4439"}
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.047833 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"}
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.076615 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.081485 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.096993 5049 scope.go:117] "RemoveContainer" containerID="cc4c6e38906afcfcbc86761782e6b32a64c85cfc810063d7755b1272fdf11ae4"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.103809 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.122670 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.125058 5049 scope.go:117] "RemoveContainer" containerID="4d90fe58d32dc9f12aa3265d7d5d34cbb2ce44000de03b49cfbe05772fdda192"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.139764 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.141471 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.145240 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.149098 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.150610 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.155674 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.160789 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.179302 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.179976 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.190225 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.245193 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.249952 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.257672 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b88b020-f951-4293-9659-b10b64dd2aad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6b88b020-f951-4293-9659-b10b64dd2aad\") " pod="openstack/nova-api-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.257732 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b362a6-4c77-4063-aeb1-4884ef4eaf46-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"69b362a6-4c77-4063-aeb1-4884ef4eaf46\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.257783 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cvxx\" (UniqueName: \"kubernetes.io/projected/6b88b020-f951-4293-9659-b10b64dd2aad-kube-api-access-6cvxx\") pod \"nova-api-0\" (UID: \"6b88b020-f951-4293-9659-b10b64dd2aad\") " pod="openstack/nova-api-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.257861 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b88b020-f951-4293-9659-b10b64dd2aad-config-data\") pod \"nova-api-0\" (UID: \"6b88b020-f951-4293-9659-b10b64dd2aad\") " pod="openstack/nova-api-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.257914 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b362a6-4c77-4063-aeb1-4884ef4eaf46-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"69b362a6-4c77-4063-aeb1-4884ef4eaf46\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.257930 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svjpd\" (UniqueName: \"kubernetes.io/projected/69b362a6-4c77-4063-aeb1-4884ef4eaf46-kube-api-access-svjpd\") pod \"nova-cell1-conductor-0\" (UID: \"69b362a6-4c77-4063-aeb1-4884ef4eaf46\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.257951 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b88b020-f951-4293-9659-b10b64dd2aad-logs\") pod \"nova-api-0\" (UID: \"6b88b020-f951-4293-9659-b10b64dd2aad\") " pod="openstack/nova-api-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.359731 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b88b020-f951-4293-9659-b10b64dd2aad-config-data\") pod \"nova-api-0\" (UID: \"6b88b020-f951-4293-9659-b10b64dd2aad\") " pod="openstack/nova-api-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.359815 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b362a6-4c77-4063-aeb1-4884ef4eaf46-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"69b362a6-4c77-4063-aeb1-4884ef4eaf46\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.359848 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svjpd\" (UniqueName: \"kubernetes.io/projected/69b362a6-4c77-4063-aeb1-4884ef4eaf46-kube-api-access-svjpd\") pod \"nova-cell1-conductor-0\" (UID: \"69b362a6-4c77-4063-aeb1-4884ef4eaf46\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.359879 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b88b020-f951-4293-9659-b10b64dd2aad-logs\") pod \"nova-api-0\" (UID: \"6b88b020-f951-4293-9659-b10b64dd2aad\") " pod="openstack/nova-api-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.359941 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b88b020-f951-4293-9659-b10b64dd2aad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6b88b020-f951-4293-9659-b10b64dd2aad\") " pod="openstack/nova-api-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.359982 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b362a6-4c77-4063-aeb1-4884ef4eaf46-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"69b362a6-4c77-4063-aeb1-4884ef4eaf46\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.360041 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cvxx\" (UniqueName: \"kubernetes.io/projected/6b88b020-f951-4293-9659-b10b64dd2aad-kube-api-access-6cvxx\") pod \"nova-api-0\" (UID: \"6b88b020-f951-4293-9659-b10b64dd2aad\") " pod="openstack/nova-api-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.361115 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b88b020-f951-4293-9659-b10b64dd2aad-logs\") pod \"nova-api-0\" (UID: \"6b88b020-f951-4293-9659-b10b64dd2aad\") " pod="openstack/nova-api-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.368463 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b88b020-f951-4293-9659-b10b64dd2aad-config-data\") pod \"nova-api-0\" (UID: \"6b88b020-f951-4293-9659-b10b64dd2aad\") " pod="openstack/nova-api-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.377181 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b88b020-f951-4293-9659-b10b64dd2aad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6b88b020-f951-4293-9659-b10b64dd2aad\") " pod="openstack/nova-api-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.380418 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69b362a6-4c77-4063-aeb1-4884ef4eaf46-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"69b362a6-4c77-4063-aeb1-4884ef4eaf46\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.381523 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69b362a6-4c77-4063-aeb1-4884ef4eaf46-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"69b362a6-4c77-4063-aeb1-4884ef4eaf46\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.385323 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svjpd\" (UniqueName: \"kubernetes.io/projected/69b362a6-4c77-4063-aeb1-4884ef4eaf46-kube-api-access-svjpd\") pod \"nova-cell1-conductor-0\" (UID: \"69b362a6-4c77-4063-aeb1-4884ef4eaf46\") " pod="openstack/nova-cell1-conductor-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.387197 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cvxx\" (UniqueName: \"kubernetes.io/projected/6b88b020-f951-4293-9659-b10b64dd2aad-kube-api-access-6cvxx\") pod \"nova-api-0\" (UID: \"6b88b020-f951-4293-9659-b10b64dd2aad\") " pod="openstack/nova-api-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.471331 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.504065 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.659440 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821290a0-bf1f-4ad2-b28c-b3065d704a40" path="/var/lib/kubelet/pods/821290a0-bf1f-4ad2-b28c-b3065d704a40/volumes"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.660222 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ce4e8-c3ee-491f-834a-1bceadb58aa9" path="/var/lib/kubelet/pods/c03ce4e8-c3ee-491f-834a-1bceadb58aa9/volumes"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.661045 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a94daa-7472-434d-9137-fe254ac3027e" path="/var/lib/kubelet/pods/e3a94daa-7472-434d-9137-fe254ac3027e/volumes"
Jan 27 18:33:49 crc kubenswrapper[5049]: I0127 18:33:49.905950 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 27 18:33:49 crc kubenswrapper[5049]: W0127 18:33:49.913592 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69b362a6_4c77_4063_aeb1_4884ef4eaf46.slice/crio-3d5ea4e9d921c1e3ed7a404e300893013f0d09d078e99015873be9240d86d41a WatchSource:0}: Error finding container 3d5ea4e9d921c1e3ed7a404e300893013f0d09d078e99015873be9240d86d41a: Status 404 returned error can't find the container with id 3d5ea4e9d921c1e3ed7a404e300893013f0d09d078e99015873be9240d86d41a
Jan 27 18:33:50 crc kubenswrapper[5049]: I0127 18:33:50.013996 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 27 18:33:50 crc kubenswrapper[5049]: I0127 18:33:50.075423 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b88b020-f951-4293-9659-b10b64dd2aad","Type":"ContainerStarted","Data":"1f6b872f6bbcc53568d2281941e0d94ee4858d3128961758c7b3a332e1bc9aec"}
Jan 27 18:33:50 crc kubenswrapper[5049]: I0127 18:33:50.077078 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"69b362a6-4c77-4063-aeb1-4884ef4eaf46","Type":"ContainerStarted","Data":"3d5ea4e9d921c1e3ed7a404e300893013f0d09d078e99015873be9240d86d41a"}
Jan 27 18:33:50 crc kubenswrapper[5049]: I0127 18:33:50.079467 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a32c6dea-a530-4aae-91fc-e4de8443aadf","Type":"ContainerStarted","Data":"db8e409701773eedbb91c903d82ebf48d744dc2eaa14885aaa05a088583196ca"}
Jan 27 18:33:50 crc kubenswrapper[5049]: I0127 18:33:50.079491 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a32c6dea-a530-4aae-91fc-e4de8443aadf","Type":"ContainerStarted","Data":"184a3627706aab72485095d72d458dfb9af11cfdd8a34f8ff9f9a0ba657162a1"}
Jan 27 18:33:50 crc kubenswrapper[5049]: I0127 18:33:50.079501 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a32c6dea-a530-4aae-91fc-e4de8443aadf","Type":"ContainerStarted","Data":"ebf2a96142e5b7ed16836bb526d46c7635daad081829a6dd0c840f0d6ccd013e"}
Jan 27 18:33:50 crc kubenswrapper[5049]: I0127 18:33:50.111397 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.111376117 podStartE2EDuration="2.111376117s" podCreationTimestamp="2026-01-27 18:33:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:50.106973083 +0000 UTC m=+5805.205946632" watchObservedRunningTime="2026-01-27 18:33:50.111376117 +0000 UTC m=+5805.210349656"
Jan 27 18:33:50 crc kubenswrapper[5049]: I0127 18:33:50.148897 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:50 crc kubenswrapper[5049]: I0127 18:33:50.227166 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g44vx"]
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.102846 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b88b020-f951-4293-9659-b10b64dd2aad","Type":"ContainerStarted","Data":"5727f067102d5fc8c29afb30a106c3b43d76e2604338d66fae1453f60ead44ae"}
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.103438 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b88b020-f951-4293-9659-b10b64dd2aad","Type":"ContainerStarted","Data":"32e283411f5ad42afafb1551f126e41d3ffc57be514e5bd8df1690f85543ba0e"}
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.115735 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"69b362a6-4c77-4063-aeb1-4884ef4eaf46","Type":"ContainerStarted","Data":"0219dc0b927e8bba6367836affc7d13389cf37807768009a021938b92f03e397"}
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.151442 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.151417501 podStartE2EDuration="2.151417501s" podCreationTimestamp="2026-01-27 18:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:51.143890079 +0000 UTC m=+5806.242863628" watchObservedRunningTime="2026-01-27 18:33:51.151417501 +0000 UTC m=+5806.250391050"
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.172866 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.172842596 podStartE2EDuration="2.172842596s" podCreationTimestamp="2026-01-27 18:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:51.165730135 +0000 UTC m=+5806.264703684" watchObservedRunningTime="2026-01-27 18:33:51.172842596 +0000 UTC m=+5806.271816145"
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.225733 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.634408 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.715965 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-combined-ca-bundle\") pod \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\" (UID: \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\") "
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.716129 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-config-data\") pod \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\" (UID: \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\") "
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.716274 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5mjq\" (UniqueName: \"kubernetes.io/projected/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-kube-api-access-f5mjq\") pod \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\" (UID: \"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d\") "
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.732031 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-kube-api-access-f5mjq" (OuterVolumeSpecName: "kube-api-access-f5mjq") pod "f1016fc2-6824-4d3e-a31b-d4dd3617cc4d" (UID: "f1016fc2-6824-4d3e-a31b-d4dd3617cc4d"). InnerVolumeSpecName "kube-api-access-f5mjq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.750196 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1016fc2-6824-4d3e-a31b-d4dd3617cc4d" (UID: "f1016fc2-6824-4d3e-a31b-d4dd3617cc4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.759016 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-config-data" (OuterVolumeSpecName: "config-data") pod "f1016fc2-6824-4d3e-a31b-d4dd3617cc4d" (UID: "f1016fc2-6824-4d3e-a31b-d4dd3617cc4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.820819 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.820874 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:51 crc kubenswrapper[5049]: I0127 18:33:51.820889 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5mjq\" (UniqueName: \"kubernetes.io/projected/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d-kube-api-access-f5mjq\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.126169 5049 generic.go:334] "Generic (PLEG): container finished" podID="f1016fc2-6824-4d3e-a31b-d4dd3617cc4d" containerID="fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47" exitCode=0
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.126209 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d","Type":"ContainerDied","Data":"fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47"}
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.126252 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f1016fc2-6824-4d3e-a31b-d4dd3617cc4d","Type":"ContainerDied","Data":"a4d6f98353487cfc1ed0218036e8ac7bff16e6b6d8e25bc0ddf5cc53b3b88ebd"}
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.126267 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.126275 5049 scope.go:117] "RemoveContainer" containerID="fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.126377 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.127291 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g44vx" podUID="fd20540d-c693-4e71-aa96-839bd95201d5" containerName="registry-server" containerID="cri-o://9a59153c3ce52333c9edd66e09b42c2e0f9aa3d9080fdbe88f50d34317ebdaaa" gracePeriod=2
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.163397 5049 scope.go:117] "RemoveContainer" containerID="fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47"
Jan 27 18:33:52 crc kubenswrapper[5049]: E0127 18:33:52.163753 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47\": container with ID starting with fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47 not found: ID does not exist" containerID="fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.163785 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47"} err="failed to get container status \"fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47\": rpc error: code = NotFound desc = could not find container \"fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47\": container with ID starting with fd44815e2efcd67c7b044b5c126027e5a635bf6ccaea81840e84495865285f47 not found: ID does not exist"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.177758 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.188279 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.199809 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 18:33:52 crc kubenswrapper[5049]: E0127 18:33:52.200305 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1016fc2-6824-4d3e-a31b-d4dd3617cc4d" containerName="nova-scheduler-scheduler"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.200329 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1016fc2-6824-4d3e-a31b-d4dd3617cc4d" containerName="nova-scheduler-scheduler"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.200552 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1016fc2-6824-4d3e-a31b-d4dd3617cc4d" containerName="nova-scheduler-scheduler"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.201383 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.203727 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.217200 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.227798 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtdwt\" (UniqueName: \"kubernetes.io/projected/b5f3f019-3ad3-4602-9dda-409b8370843b-kube-api-access-rtdwt\") pod \"nova-scheduler-0\" (UID: \"b5f3f019-3ad3-4602-9dda-409b8370843b\") " pod="openstack/nova-scheduler-0"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.227940 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f3f019-3ad3-4602-9dda-409b8370843b-config-data\") pod \"nova-scheduler-0\" (UID: \"b5f3f019-3ad3-4602-9dda-409b8370843b\") " pod="openstack/nova-scheduler-0"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.228041 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f3f019-3ad3-4602-9dda-409b8370843b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b5f3f019-3ad3-4602-9dda-409b8370843b\") " pod="openstack/nova-scheduler-0"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.329401 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f3f019-3ad3-4602-9dda-409b8370843b-config-data\") pod \"nova-scheduler-0\" (UID: \"b5f3f019-3ad3-4602-9dda-409b8370843b\") " pod="openstack/nova-scheduler-0"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.329502 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f3f019-3ad3-4602-9dda-409b8370843b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b5f3f019-3ad3-4602-9dda-409b8370843b\") " pod="openstack/nova-scheduler-0"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.329593 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtdwt\" (UniqueName: \"kubernetes.io/projected/b5f3f019-3ad3-4602-9dda-409b8370843b-kube-api-access-rtdwt\") pod \"nova-scheduler-0\" (UID: \"b5f3f019-3ad3-4602-9dda-409b8370843b\") " pod="openstack/nova-scheduler-0"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.334529 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f3f019-3ad3-4602-9dda-409b8370843b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b5f3f019-3ad3-4602-9dda-409b8370843b\") " pod="openstack/nova-scheduler-0"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.335493 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5f3f019-3ad3-4602-9dda-409b8370843b-config-data\") pod \"nova-scheduler-0\" (UID: \"b5f3f019-3ad3-4602-9dda-409b8370843b\") " pod="openstack/nova-scheduler-0"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.348911 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtdwt\" (UniqueName: \"kubernetes.io/projected/b5f3f019-3ad3-4602-9dda-409b8370843b-kube-api-access-rtdwt\") pod \"nova-scheduler-0\" (UID: \"b5f3f019-3ad3-4602-9dda-409b8370843b\") " pod="openstack/nova-scheduler-0"
Jan 27 18:33:52 crc kubenswrapper[5049]: I0127 18:33:52.621381 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.125411 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.161307 5049 generic.go:334] "Generic (PLEG): container finished" podID="04f936b5-5271-4bdb-89aa-bcbcc6e526ec" containerID="3b694ea58e80266f34b80393c7c0158d3e8e1d97ffb71542ce50980dcf5b12b0" exitCode=0
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.161713 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"04f936b5-5271-4bdb-89aa-bcbcc6e526ec","Type":"ContainerDied","Data":"3b694ea58e80266f34b80393c7c0158d3e8e1d97ffb71542ce50980dcf5b12b0"}
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.167488 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b5f3f019-3ad3-4602-9dda-409b8370843b","Type":"ContainerStarted","Data":"a8b8ccd847327f80a4eede6b7f4fe044e374aaa39a0fd88eda7db8e6d79fae92"}
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.183196 5049 generic.go:334] "Generic (PLEG): container finished" podID="fd20540d-c693-4e71-aa96-839bd95201d5" containerID="9a59153c3ce52333c9edd66e09b42c2e0f9aa3d9080fdbe88f50d34317ebdaaa" exitCode=0
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.183388 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g44vx" event={"ID":"fd20540d-c693-4e71-aa96-839bd95201d5","Type":"ContainerDied","Data":"9a59153c3ce52333c9edd66e09b42c2e0f9aa3d9080fdbe88f50d34317ebdaaa"}
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.362316 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.413219 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.453742 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd20540d-c693-4e71-aa96-839bd95201d5-utilities\") pod \"fd20540d-c693-4e71-aa96-839bd95201d5\" (UID: \"fd20540d-c693-4e71-aa96-839bd95201d5\") "
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.453858 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd20540d-c693-4e71-aa96-839bd95201d5-catalog-content\") pod \"fd20540d-c693-4e71-aa96-839bd95201d5\" (UID: \"fd20540d-c693-4e71-aa96-839bd95201d5\") "
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.453911 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-combined-ca-bundle\") pod \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\" (UID: \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\") "
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.453993 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-config-data\") pod \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\" (UID: \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\") "
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.454036 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h862f\" (UniqueName: \"kubernetes.io/projected/fd20540d-c693-4e71-aa96-839bd95201d5-kube-api-access-h862f\") pod \"fd20540d-c693-4e71-aa96-839bd95201d5\" (UID: \"fd20540d-c693-4e71-aa96-839bd95201d5\") "
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.454067 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgsd2\" (UniqueName: \"kubernetes.io/projected/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-kube-api-access-dgsd2\") pod \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\" (UID: \"04f936b5-5271-4bdb-89aa-bcbcc6e526ec\") "
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.458135 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd20540d-c693-4e71-aa96-839bd95201d5-utilities" (OuterVolumeSpecName: "utilities") pod "fd20540d-c693-4e71-aa96-839bd95201d5" (UID: "fd20540d-c693-4e71-aa96-839bd95201d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.463402 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd20540d-c693-4e71-aa96-839bd95201d5-kube-api-access-h862f" (OuterVolumeSpecName: "kube-api-access-h862f") pod "fd20540d-c693-4e71-aa96-839bd95201d5" (UID: "fd20540d-c693-4e71-aa96-839bd95201d5"). InnerVolumeSpecName "kube-api-access-h862f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.463469 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-kube-api-access-dgsd2" (OuterVolumeSpecName: "kube-api-access-dgsd2") pod "04f936b5-5271-4bdb-89aa-bcbcc6e526ec" (UID: "04f936b5-5271-4bdb-89aa-bcbcc6e526ec"). InnerVolumeSpecName "kube-api-access-dgsd2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.494811 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04f936b5-5271-4bdb-89aa-bcbcc6e526ec" (UID: "04f936b5-5271-4bdb-89aa-bcbcc6e526ec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.504304 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-config-data" (OuterVolumeSpecName: "config-data") pod "04f936b5-5271-4bdb-89aa-bcbcc6e526ec" (UID: "04f936b5-5271-4bdb-89aa-bcbcc6e526ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.516288 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd20540d-c693-4e71-aa96-839bd95201d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd20540d-c693-4e71-aa96-839bd95201d5" (UID: "fd20540d-c693-4e71-aa96-839bd95201d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.556859 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.556898 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h862f\" (UniqueName: \"kubernetes.io/projected/fd20540d-c693-4e71-aa96-839bd95201d5-kube-api-access-h862f\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.556910 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgsd2\" (UniqueName: \"kubernetes.io/projected/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-kube-api-access-dgsd2\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.556919 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd20540d-c693-4e71-aa96-839bd95201d5-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.556927 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd20540d-c693-4e71-aa96-839bd95201d5-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.556938 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f936b5-5271-4bdb-89aa-bcbcc6e526ec-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.656192 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1016fc2-6824-4d3e-a31b-d4dd3617cc4d" path="/var/lib/kubelet/pods/f1016fc2-6824-4d3e-a31b-d4dd3617cc4d/volumes"
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.694838 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 27 18:33:53 crc kubenswrapper[5049]: I0127 18:33:53.694970 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.194726 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g44vx" event={"ID":"fd20540d-c693-4e71-aa96-839bd95201d5","Type":"ContainerDied","Data":"3be98ef3327fc01fc65b35721fbf3d7814284de287848dbbe44cebcf6987e51e"}
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.194783 5049 scope.go:117] "RemoveContainer" containerID="9a59153c3ce52333c9edd66e09b42c2e0f9aa3d9080fdbe88f50d34317ebdaaa"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.194941 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g44vx"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.198623 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"04f936b5-5271-4bdb-89aa-bcbcc6e526ec","Type":"ContainerDied","Data":"e58f5b6c23f2bd87815194a743ef1a2988b8e19faa39c489481d47d1d01d4eee"}
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.198728 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.209665 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b5f3f019-3ad3-4602-9dda-409b8370843b","Type":"ContainerStarted","Data":"fb313e998c6440c7ac73d487ef367a2d8f329fb755cab5a07da528f15f2320d1"}
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.220575 5049 scope.go:117] "RemoveContainer" containerID="5aa093e47c82341bfccb40a4eb7b18e6d2f40f977db993104fdd3cea733dfa1d"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.244978 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g44vx"]
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.248718 5049 scope.go:117] "RemoveContainer" containerID="4766049ae5ca5b17b3465a59e5b78186448ba5ae7760c92e5466dad6592604de"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.260784 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g44vx"]
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.274539 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.298261 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.301437 5049 scope.go:117] "RemoveContainer" containerID="3b694ea58e80266f34b80393c7c0158d3e8e1d97ffb71542ce50980dcf5b12b0"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.309156 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 18:33:54 crc kubenswrapper[5049]: E0127 18:33:54.309585 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd20540d-c693-4e71-aa96-839bd95201d5" containerName="extract-utilities"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.309604 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd20540d-c693-4e71-aa96-839bd95201d5" containerName="extract-utilities"
Jan 27 18:33:54 crc kubenswrapper[5049]: E0127 18:33:54.309620 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd20540d-c693-4e71-aa96-839bd95201d5" containerName="registry-server"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.309627 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd20540d-c693-4e71-aa96-839bd95201d5" containerName="registry-server"
Jan 27 18:33:54 crc kubenswrapper[5049]: E0127 18:33:54.309646 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f936b5-5271-4bdb-89aa-bcbcc6e526ec" containerName="nova-cell0-conductor-conductor"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.309653 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f936b5-5271-4bdb-89aa-bcbcc6e526ec" containerName="nova-cell0-conductor-conductor"
Jan 27 18:33:54 crc kubenswrapper[5049]: E0127 18:33:54.309704 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd20540d-c693-4e71-aa96-839bd95201d5" containerName="extract-content"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.309714 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd20540d-c693-4e71-aa96-839bd95201d5" containerName="extract-content"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.309938 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd20540d-c693-4e71-aa96-839bd95201d5" containerName="registry-server"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.309956 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f936b5-5271-4bdb-89aa-bcbcc6e526ec" containerName="nova-cell0-conductor-conductor"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.310668 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.315215 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.322284 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.322259455 podStartE2EDuration="2.322259455s" podCreationTimestamp="2026-01-27 18:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:54.290240291 +0000 UTC m=+5809.389213850" watchObservedRunningTime="2026-01-27 18:33:54.322259455 +0000 UTC m=+5809.421233004"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.334742 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.378404 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzz5j\" (UniqueName: \"kubernetes.io/projected/831a6998-fc4d-44d4-bf18-e75f37c02c3e-kube-api-access-rzz5j\") pod \"nova-cell0-conductor-0\" (UID: \"831a6998-fc4d-44d4-bf18-e75f37c02c3e\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.378743 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/831a6998-fc4d-44d4-bf18-e75f37c02c3e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"831a6998-fc4d-44d4-bf18-e75f37c02c3e\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.378843 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/831a6998-fc4d-44d4-bf18-e75f37c02c3e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"831a6998-fc4d-44d4-bf18-e75f37c02c3e\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.479795 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/831a6998-fc4d-44d4-bf18-e75f37c02c3e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"831a6998-fc4d-44d4-bf18-e75f37c02c3e\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.479850 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzz5j\" (UniqueName: \"kubernetes.io/projected/831a6998-fc4d-44d4-bf18-e75f37c02c3e-kube-api-access-rzz5j\") pod \"nova-cell0-conductor-0\" (UID: \"831a6998-fc4d-44d4-bf18-e75f37c02c3e\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.479906 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/831a6998-fc4d-44d4-bf18-e75f37c02c3e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"831a6998-fc4d-44d4-bf18-e75f37c02c3e\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.484946 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/831a6998-fc4d-44d4-bf18-e75f37c02c3e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"831a6998-fc4d-44d4-bf18-e75f37c02c3e\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.485128 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/831a6998-fc4d-44d4-bf18-e75f37c02c3e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"831a6998-fc4d-44d4-bf18-e75f37c02c3e\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.496631 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzz5j\" (UniqueName: \"kubernetes.io/projected/831a6998-fc4d-44d4-bf18-e75f37c02c3e-kube-api-access-rzz5j\") pod \"nova-cell0-conductor-0\" (UID: \"831a6998-fc4d-44d4-bf18-e75f37c02c3e\") " pod="openstack/nova-cell0-conductor-0"
Jan 27 18:33:54 crc kubenswrapper[5049]: I0127 18:33:54.677717 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 27 18:33:55 crc kubenswrapper[5049]: I0127 18:33:55.122804 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Jan 27 18:33:55 crc kubenswrapper[5049]: I0127 18:33:55.219322 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"831a6998-fc4d-44d4-bf18-e75f37c02c3e","Type":"ContainerStarted","Data":"829b1159dcbd875e9856049632d8261ca0200cb56bf18ff0517b72dba310830f"}
Jan 27 18:33:55 crc kubenswrapper[5049]: I0127 18:33:55.658322 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04f936b5-5271-4bdb-89aa-bcbcc6e526ec" path="/var/lib/kubelet/pods/04f936b5-5271-4bdb-89aa-bcbcc6e526ec/volumes"
Jan 27 18:33:55 crc kubenswrapper[5049]: I0127 18:33:55.659167 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd20540d-c693-4e71-aa96-839bd95201d5" path="/var/lib/kubelet/pods/fd20540d-c693-4e71-aa96-839bd95201d5/volumes"
Jan 27 18:33:56 crc kubenswrapper[5049]: I0127 18:33:56.226573 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:33:56 crc kubenswrapper[5049]: I0127 18:33:56.239528 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"831a6998-fc4d-44d4-bf18-e75f37c02c3e","Type":"ContainerStarted","Data":"9c48039861dfd9bb79d54d09f66fdd76a2a5c26d7e504e354ff0bd1d62fc756a"}
Jan 27 18:33:56 crc kubenswrapper[5049]: I0127 18:33:56.240162 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Jan 27 18:33:56 crc kubenswrapper[5049]: I0127 18:33:56.253717 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:33:56 crc kubenswrapper[5049]: I0127 18:33:56.268825 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.268805052 podStartE2EDuration="2.268805052s" podCreationTimestamp="2026-01-27 18:33:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:33:56.262119133 +0000 UTC m=+5811.361092682" watchObservedRunningTime="2026-01-27 18:33:56.268805052 +0000 UTC m=+5811.367778601"
Jan 27 18:33:57 crc kubenswrapper[5049]: I0127 18:33:57.261231 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 27 18:33:57 crc kubenswrapper[5049]: I0127 18:33:57.622871 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 27 18:33:58 crc kubenswrapper[5049]: I0127 18:33:58.697069 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 27 18:33:58 crc kubenswrapper[5049]: I0127 18:33:58.697353 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 27 18:33:59 crc kubenswrapper[5049]: I0127 18:33:59.471718 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 18:33:59 crc kubenswrapper[5049]: I0127 18:33:59.471782 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 27 18:33:59 crc kubenswrapper[5049]: I0127 18:33:59.535091 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 27 18:33:59 crc kubenswrapper[5049]: I0127 18:33:59.738089 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a32c6dea-a530-4aae-91fc-e4de8443aadf" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.85:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:33:59 crc kubenswrapper[5049]: I0127 18:33:59.780940 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a32c6dea-a530-4aae-91fc-e4de8443aadf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.85:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:34:00 crc kubenswrapper[5049]: I0127 18:34:00.553885 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6b88b020-f951-4293-9659-b10b64dd2aad" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.86:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:34:00 crc kubenswrapper[5049]: I0127 18:34:00.554271 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6b88b020-f951-4293-9659-b10b64dd2aad" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.86:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 27 18:34:02 crc kubenswrapper[5049]: I0127 18:34:02.622384 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 27 18:34:02 crc kubenswrapper[5049]: I0127 18:34:02.652407 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.330225 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.580639 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.582588 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.608597 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.627846 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.694817 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffc385a2-1649-4e2b-943a-144ac4722ce9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.694895 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.694961 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.695188 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml49m\" (UniqueName: \"kubernetes.io/projected/ffc385a2-1649-4e2b-943a-144ac4722ce9-kube-api-access-ml49m\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.695515 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-config-data\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.695557 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-scripts\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.796897 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.796975 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml49m\" (UniqueName: \"kubernetes.io/projected/ffc385a2-1649-4e2b-943a-144ac4722ce9-kube-api-access-ml49m\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.797076 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-config-data\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.797097 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-scripts\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.797111 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffc385a2-1649-4e2b-943a-144ac4722ce9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.797180 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.797243 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffc385a2-1649-4e2b-943a-144ac4722ce9-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.802348 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-scripts\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.802386 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.803201 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-config-data\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.805633 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.814640 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml49m\" (UniqueName: \"kubernetes.io/projected/ffc385a2-1649-4e2b-943a-144ac4722ce9-kube-api-access-ml49m\") pod \"cinder-scheduler-0\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " pod="openstack/cinder-scheduler-0"
Jan 27 18:34:03 crc kubenswrapper[5049]: I0127 18:34:03.923356 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 27 18:34:04 crc kubenswrapper[5049]: I0127 18:34:04.383091 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 27 18:34:04 crc kubenswrapper[5049]: W0127 18:34:04.387069 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffc385a2_1649_4e2b_943a_144ac4722ce9.slice/crio-d6a7ed6d8148f0e831e563590088d62b91268d8a77cfe67fd7bba120542fb9dd WatchSource:0}: Error finding container d6a7ed6d8148f0e831e563590088d62b91268d8a77cfe67fd7bba120542fb9dd: Status 404 returned error can't find the container with id d6a7ed6d8148f0e831e563590088d62b91268d8a77cfe67fd7bba120542fb9dd
Jan 27 18:34:04 crc kubenswrapper[5049]: I0127 18:34:04.699891 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 27 18:34:04 crc kubenswrapper[5049]: I0127 18:34:04.700477 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6f28a351-8073-4e1b-ba9b-c7824994fa1b" containerName="cinder-api-log" containerID="cri-o://09f65afd974212066fae8a522643bbf54a3f24f444361580103a89e8e95bb5ce" gracePeriod=30
Jan 27 18:34:04 crc kubenswrapper[5049]: I0127 18:34:04.700986 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6f28a351-8073-4e1b-ba9b-c7824994fa1b" containerName="cinder-api" containerID="cri-o://272b264bb350d28af1216739dc61deba8a22219a7d6e5ffcc2d7af9504278311" gracePeriod=30
Jan 27 18:34:04 crc kubenswrapper[5049]: I0127 18:34:04.730254 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.331437 5049 generic.go:334] "Generic (PLEG): container finished" podID="6f28a351-8073-4e1b-ba9b-c7824994fa1b" containerID="09f65afd974212066fae8a522643bbf54a3f24f444361580103a89e8e95bb5ce" exitCode=143
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.331682 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f28a351-8073-4e1b-ba9b-c7824994fa1b","Type":"ContainerDied","Data":"09f65afd974212066fae8a522643bbf54a3f24f444361580103a89e8e95bb5ce"}
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.333423 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ffc385a2-1649-4e2b-943a-144ac4722ce9","Type":"ContainerStarted","Data":"a1786eb7c9a209c7c07fb911bda28f31d8694a37270feccd84c1e40a7ce8eefc"}
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.333443 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ffc385a2-1649-4e2b-943a-144ac4722ce9","Type":"ContainerStarted","Data":"d6a7ed6d8148f0e831e563590088d62b91268d8a77cfe67fd7bba120542fb9dd"}
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.527007 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"]
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.528626 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.532336 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data"
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.544065 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.631246 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bbe41245-d028-4fc4-bce8-d166cb88403d-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0"
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.631582 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0"
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.631643 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0"
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.631711 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0"
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.631755 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0"
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.631776 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0"
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.631810 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbe41245-d028-4fc4-bce8-d166cb88403d-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0"
Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.631846 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-sys\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0"
Jan 27 18:34:05
crc kubenswrapper[5049]: I0127 18:34:05.631872 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.631889 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbe41245-d028-4fc4-bce8-d166cb88403d-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.631903 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.631916 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-run\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.631957 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-dev\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.631977 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgwml\" (UniqueName: \"kubernetes.io/projected/bbe41245-d028-4fc4-bce8-d166cb88403d-kube-api-access-jgwml\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.632008 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe41245-d028-4fc4-bce8-d166cb88403d-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.632022 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe41245-d028-4fc4-bce8-d166cb88403d-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.733520 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.733563 
5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.733608 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.733631 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.733665 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbe41245-d028-4fc4-bce8-d166cb88403d-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.733727 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-sys\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.733764 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.733792 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbe41245-d028-4fc4-bce8-d166cb88403d-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.733817 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.733842 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-run\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.733896 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-dev\") pod \"cinder-volume-volume1-0\" (UID: 
\"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.733935 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgwml\" (UniqueName: \"kubernetes.io/projected/bbe41245-d028-4fc4-bce8-d166cb88403d-kube-api-access-jgwml\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.733961 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe41245-d028-4fc4-bce8-d166cb88403d-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.733979 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe41245-d028-4fc4-bce8-d166cb88403d-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.734002 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bbe41245-d028-4fc4-bce8-d166cb88403d-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.734022 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.734096 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.734301 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.734327 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.735148 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.735185 5049 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.735246 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-sys\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.735325 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.735889 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-run\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.735983 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.736079 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bbe41245-d028-4fc4-bce8-d166cb88403d-dev\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.737638 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.740882 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bbe41245-d028-4fc4-bce8-d166cb88403d-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.741147 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbe41245-d028-4fc4-bce8-d166cb88403d-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.741762 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbe41245-d028-4fc4-bce8-d166cb88403d-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.741908 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbe41245-d028-4fc4-bce8-d166cb88403d-config-data\") pod 
\"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.756692 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbe41245-d028-4fc4-bce8-d166cb88403d-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.758215 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgwml\" (UniqueName: \"kubernetes.io/projected/bbe41245-d028-4fc4-bce8-d166cb88403d-kube-api-access-jgwml\") pod \"cinder-volume-volume1-0\" (UID: \"bbe41245-d028-4fc4-bce8-d166cb88403d\") " pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:05 crc kubenswrapper[5049]: I0127 18:34:05.903878 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.156779 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.158510 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.161282 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.181724 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.244594 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.244991 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.245024 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.245074 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sq5s\" (UniqueName: \"kubernetes.io/projected/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-kube-api-access-7sq5s\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.245161 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-scripts\") pod \"cinder-backup-0\" (UID: 
\"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.245280 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-ceph\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.245358 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.245433 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-run\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.245543 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.245608 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-config-data\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.245643 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-sys\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.245711 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-lib-modules\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.245788 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.245886 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-dev\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.245949 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.246006 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.343844 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ffc385a2-1649-4e2b-943a-144ac4722ce9","Type":"ContainerStarted","Data":"15e6242b08f2a052b86a05d9b5b5c9588e661aeb6abecc6235c86009a4222116"} Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348039 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-ceph\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348125 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348148 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-run\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348168 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348189 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-config-data\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348210 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-sys\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348232 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-lib-modules\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348254 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348279 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-dev\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348299 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348315 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348349 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348377 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348398 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348433 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sq5s\" (UniqueName: \"kubernetes.io/projected/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-kube-api-access-7sq5s\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348470 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-scripts\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.348746 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.349500 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-etc-nvme\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.349564 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.349596 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-run\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.349629 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-sys\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.349789 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-lib-modules\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.349974 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.350003 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.350022 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.350037 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-dev\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.354040 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.354731 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-config-data\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.355142 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-scripts\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.367171 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-ceph\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.368886 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.368871831 podStartE2EDuration="3.368871831s" podCreationTimestamp="2026-01-27 18:34:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:34:06.36563172 +0000 UTC m=+5821.464605269" watchObservedRunningTime="2026-01-27 18:34:06.368871831 +0000 UTC m=+5821.467845380" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.370881 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-config-data-custom\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.376416 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sq5s\" (UniqueName: \"kubernetes.io/projected/ae55fc0c-fd54-4e6a-a3ae-89d2e764d789-kube-api-access-7sq5s\") pod \"cinder-backup-0\" (UID: \"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789\") " pod="openstack/cinder-backup-0" Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.477032 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 27 18:34:06 crc kubenswrapper[5049]: I0127 18:34:06.509551 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 27 18:34:07 crc kubenswrapper[5049]: I0127 18:34:07.041451 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 27 18:34:07 crc kubenswrapper[5049]: I0127 18:34:07.361804 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789","Type":"ContainerStarted","Data":"6374ce1c05d5867ba506d26d03765181bcf0c003dec70ad267ee5b70a098e528"} Jan 27 18:34:07 crc kubenswrapper[5049]: I0127 18:34:07.364225 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"bbe41245-d028-4fc4-bce8-d166cb88403d","Type":"ContainerStarted","Data":"02a3ec14cb2c487b192898887f77570ad44f941a4415252778a2de8b2db7e11c"} Jan 27 18:34:07 crc kubenswrapper[5049]: I0127 18:34:07.861295 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="6f28a351-8073-4e1b-ba9b-c7824994fa1b" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.1.82:8776/healthcheck\": read tcp 10.217.0.2:39056->10.217.1.82:8776: read: connection reset by peer" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.259748 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.304020 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-scripts\") pod \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.304120 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f28a351-8073-4e1b-ba9b-c7824994fa1b-etc-machine-id\") pod \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.304198 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-config-data-custom\") pod \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.304218 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxxhw\" (UniqueName: \"kubernetes.io/projected/6f28a351-8073-4e1b-ba9b-c7824994fa1b-kube-api-access-mxxhw\") pod \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.304225 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f28a351-8073-4e1b-ba9b-c7824994fa1b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6f28a351-8073-4e1b-ba9b-c7824994fa1b" (UID: "6f28a351-8073-4e1b-ba9b-c7824994fa1b"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.304333 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-combined-ca-bundle\") pod \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.304372 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f28a351-8073-4e1b-ba9b-c7824994fa1b-logs\") pod \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.304418 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-config-data\") pod \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\" (UID: \"6f28a351-8073-4e1b-ba9b-c7824994fa1b\") " Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.305317 5049 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f28a351-8073-4e1b-ba9b-c7824994fa1b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.307921 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f28a351-8073-4e1b-ba9b-c7824994fa1b-logs" (OuterVolumeSpecName: "logs") pod "6f28a351-8073-4e1b-ba9b-c7824994fa1b" (UID: "6f28a351-8073-4e1b-ba9b-c7824994fa1b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.309935 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-scripts" (OuterVolumeSpecName: "scripts") pod "6f28a351-8073-4e1b-ba9b-c7824994fa1b" (UID: "6f28a351-8073-4e1b-ba9b-c7824994fa1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.312017 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6f28a351-8073-4e1b-ba9b-c7824994fa1b" (UID: "6f28a351-8073-4e1b-ba9b-c7824994fa1b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.313990 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f28a351-8073-4e1b-ba9b-c7824994fa1b-kube-api-access-mxxhw" (OuterVolumeSpecName: "kube-api-access-mxxhw") pod "6f28a351-8073-4e1b-ba9b-c7824994fa1b" (UID: "6f28a351-8073-4e1b-ba9b-c7824994fa1b"). InnerVolumeSpecName "kube-api-access-mxxhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.386377 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-config-data" (OuterVolumeSpecName: "config-data") pod "6f28a351-8073-4e1b-ba9b-c7824994fa1b" (UID: "6f28a351-8073-4e1b-ba9b-c7824994fa1b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.395605 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789","Type":"ContainerStarted","Data":"e7cf9bf8131b0275c32c6a36bd70a3c35bc9c2646f236e30ba322ed3c226accc"} Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.398961 5049 generic.go:334] "Generic (PLEG): container finished" podID="6f28a351-8073-4e1b-ba9b-c7824994fa1b" containerID="272b264bb350d28af1216739dc61deba8a22219a7d6e5ffcc2d7af9504278311" exitCode=0 Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.399019 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f28a351-8073-4e1b-ba9b-c7824994fa1b","Type":"ContainerDied","Data":"272b264bb350d28af1216739dc61deba8a22219a7d6e5ffcc2d7af9504278311"} Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.399044 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f28a351-8073-4e1b-ba9b-c7824994fa1b","Type":"ContainerDied","Data":"4293bc4a6d9b24e2097d94459e18043a039c57b775d224abab5962682cc4a149"} Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.399060 5049 scope.go:117] "RemoveContainer" containerID="272b264bb350d28af1216739dc61deba8a22219a7d6e5ffcc2d7af9504278311" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.399172 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.409666 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.409710 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.409719 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxxhw\" (UniqueName: \"kubernetes.io/projected/6f28a351-8073-4e1b-ba9b-c7824994fa1b-kube-api-access-mxxhw\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.409728 5049 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f28a351-8073-4e1b-ba9b-c7824994fa1b-logs\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.409739 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.419133 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f28a351-8073-4e1b-ba9b-c7824994fa1b" (UID: "6f28a351-8073-4e1b-ba9b-c7824994fa1b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.425534 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"bbe41245-d028-4fc4-bce8-d166cb88403d","Type":"ContainerStarted","Data":"0af8d13c893891a8bb70fa73806ad5be6844173326c9de390b9a6ce4ddd648e0"} Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.425584 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"bbe41245-d028-4fc4-bce8-d166cb88403d","Type":"ContainerStarted","Data":"cb7e75852fe869d87ec5430b7bd3a2646af0d0d7aac71eedcd6ce75bdf4454f9"} Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.450766 5049 scope.go:117] "RemoveContainer" containerID="09f65afd974212066fae8a522643bbf54a3f24f444361580103a89e8e95bb5ce" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.457942 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.665491389 podStartE2EDuration="3.457923632s" podCreationTimestamp="2026-01-27 18:34:05 +0000 UTC" firstStartedPulling="2026-01-27 18:34:06.481072289 +0000 UTC m=+5821.580045838" lastFinishedPulling="2026-01-27 18:34:07.273504532 +0000 UTC m=+5822.372478081" observedRunningTime="2026-01-27 18:34:08.451349946 +0000 UTC m=+5823.550323495" watchObservedRunningTime="2026-01-27 18:34:08.457923632 +0000 UTC m=+5823.556897171" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.496168 5049 scope.go:117] "RemoveContainer" containerID="272b264bb350d28af1216739dc61deba8a22219a7d6e5ffcc2d7af9504278311" Jan 27 18:34:08 crc kubenswrapper[5049]: E0127 18:34:08.496663 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"272b264bb350d28af1216739dc61deba8a22219a7d6e5ffcc2d7af9504278311\": container with ID starting with 272b264bb350d28af1216739dc61deba8a22219a7d6e5ffcc2d7af9504278311 not found: ID does not exist" containerID="272b264bb350d28af1216739dc61deba8a22219a7d6e5ffcc2d7af9504278311" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.496735 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"272b264bb350d28af1216739dc61deba8a22219a7d6e5ffcc2d7af9504278311"} err="failed to get container status \"272b264bb350d28af1216739dc61deba8a22219a7d6e5ffcc2d7af9504278311\": rpc error: code = NotFound desc = could not find container \"272b264bb350d28af1216739dc61deba8a22219a7d6e5ffcc2d7af9504278311\": container with ID starting with 272b264bb350d28af1216739dc61deba8a22219a7d6e5ffcc2d7af9504278311 not found: ID does not exist" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.496761 5049 scope.go:117] "RemoveContainer" containerID="09f65afd974212066fae8a522643bbf54a3f24f444361580103a89e8e95bb5ce" Jan 27 18:34:08 crc kubenswrapper[5049]: E0127 18:34:08.497122 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09f65afd974212066fae8a522643bbf54a3f24f444361580103a89e8e95bb5ce\": container with ID starting with 09f65afd974212066fae8a522643bbf54a3f24f444361580103a89e8e95bb5ce not found: ID does not exist" containerID="09f65afd974212066fae8a522643bbf54a3f24f444361580103a89e8e95bb5ce" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.497175 5049 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"09f65afd974212066fae8a522643bbf54a3f24f444361580103a89e8e95bb5ce"} err="failed to get container status \"09f65afd974212066fae8a522643bbf54a3f24f444361580103a89e8e95bb5ce\": rpc error: code = NotFound desc = could not find container \"09f65afd974212066fae8a522643bbf54a3f24f444361580103a89e8e95bb5ce\": container with ID starting with 09f65afd974212066fae8a522643bbf54a3f24f444361580103a89e8e95bb5ce not found: ID does not exist" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.511697 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f28a351-8073-4e1b-ba9b-c7824994fa1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.773036 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.773184 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.773249 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.782769 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.788023 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.803752 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.839892 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:34:08 crc kubenswrapper[5049]: E0127 18:34:08.840872 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f28a351-8073-4e1b-ba9b-c7824994fa1b" containerName="cinder-api" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.840948 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f28a351-8073-4e1b-ba9b-c7824994fa1b" containerName="cinder-api" Jan 27 18:34:08 crc kubenswrapper[5049]: E0127 18:34:08.841010 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f28a351-8073-4e1b-ba9b-c7824994fa1b" containerName="cinder-api-log" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.841071 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f28a351-8073-4e1b-ba9b-c7824994fa1b" containerName="cinder-api-log" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.841311 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f28a351-8073-4e1b-ba9b-c7824994fa1b" containerName="cinder-api" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.841393 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f28a351-8073-4e1b-ba9b-c7824994fa1b" containerName="cinder-api-log" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.842454 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.853337 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.870288 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.920527 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3756b8fa-b794-4021-b68f-1bb730f59b03-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.920611 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3756b8fa-b794-4021-b68f-1bb730f59b03-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.920663 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8dxv\" (UniqueName: \"kubernetes.io/projected/3756b8fa-b794-4021-b68f-1bb730f59b03-kube-api-access-m8dxv\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.920833 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3756b8fa-b794-4021-b68f-1bb730f59b03-logs\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.920871 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3756b8fa-b794-4021-b68f-1bb730f59b03-config-data-custom\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.920898 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3756b8fa-b794-4021-b68f-1bb730f59b03-scripts\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.920931 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3756b8fa-b794-4021-b68f-1bb730f59b03-config-data\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:08 crc kubenswrapper[5049]: I0127 18:34:08.923478 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.026911 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3756b8fa-b794-4021-b68f-1bb730f59b03-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: 
I0127 18:34:09.027115 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3756b8fa-b794-4021-b68f-1bb730f59b03-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.027244 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8dxv\" (UniqueName: \"kubernetes.io/projected/3756b8fa-b794-4021-b68f-1bb730f59b03-kube-api-access-m8dxv\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.027365 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3756b8fa-b794-4021-b68f-1bb730f59b03-logs\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.027411 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3756b8fa-b794-4021-b68f-1bb730f59b03-config-data-custom\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.027449 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3756b8fa-b794-4021-b68f-1bb730f59b03-scripts\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.027497 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3756b8fa-b794-4021-b68f-1bb730f59b03-config-data\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.028185 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3756b8fa-b794-4021-b68f-1bb730f59b03-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.050262 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3756b8fa-b794-4021-b68f-1bb730f59b03-logs\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.055241 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3756b8fa-b794-4021-b68f-1bb730f59b03-config-data\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.055375 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3756b8fa-b794-4021-b68f-1bb730f59b03-config-data-custom\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.057525 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3756b8fa-b794-4021-b68f-1bb730f59b03-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.067308 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3756b8fa-b794-4021-b68f-1bb730f59b03-scripts\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.073546 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8dxv\" (UniqueName: \"kubernetes.io/projected/3756b8fa-b794-4021-b68f-1bb730f59b03-kube-api-access-m8dxv\") pod \"cinder-api-0\" (UID: \"3756b8fa-b794-4021-b68f-1bb730f59b03\") " pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.207455 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.441515 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"ae55fc0c-fd54-4e6a-a3ae-89d2e764d789","Type":"ContainerStarted","Data":"bb32d112e82f9b9f00a69bef0c0fedb6ac4bfa80dccabfa82b6bf96183e2dc4e"} Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.470899 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.651070946 podStartE2EDuration="3.470880651s" podCreationTimestamp="2026-01-27 18:34:06 +0000 UTC" firstStartedPulling="2026-01-27 18:34:07.051530265 +0000 UTC m=+5822.150503814" lastFinishedPulling="2026-01-27 18:34:07.87133997 +0000 UTC m=+5822.970313519" observedRunningTime="2026-01-27 18:34:09.465321504 +0000 UTC m=+5824.564295053" watchObservedRunningTime="2026-01-27 18:34:09.470880651 +0000 UTC m=+5824.569854200" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.476022 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.476083 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.476833 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.476879 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.479060 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.481134 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.660099 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f28a351-8073-4e1b-ba9b-c7824994fa1b" path="/var/lib/kubelet/pods/6f28a351-8073-4e1b-ba9b-c7824994fa1b/volumes" Jan 27 18:34:09 crc kubenswrapper[5049]: I0127 18:34:09.668051 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 18:34:10 crc kubenswrapper[5049]: I0127 18:34:10.462751 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"3756b8fa-b794-4021-b68f-1bb730f59b03","Type":"ContainerStarted","Data":"bd86ec7ccb53e0f92bc4bd3bee0beaf0d76df6bf8721ca172585d304e5552a36"} Jan 27 18:34:10 crc kubenswrapper[5049]: I0127 18:34:10.463076 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3756b8fa-b794-4021-b68f-1bb730f59b03","Type":"ContainerStarted","Data":"17edd65d6dc38f2e4d9f8ab37e9e88660818a3f8d49b9bd71220f3f4a1a001a5"} Jan 27 18:34:10 crc kubenswrapper[5049]: I0127 18:34:10.904053 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:11 crc kubenswrapper[5049]: I0127 18:34:11.474326 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3756b8fa-b794-4021-b68f-1bb730f59b03","Type":"ContainerStarted","Data":"cc00562fddcb93130706b10d43d6135ba60d0be95799c554bc6350b012cd553c"} Jan 27 18:34:11 crc kubenswrapper[5049]: I0127 18:34:11.474464 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 18:34:11 crc kubenswrapper[5049]: I0127 18:34:11.505178 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.505152655 podStartE2EDuration="3.505152655s" podCreationTimestamp="2026-01-27 18:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:34:11.493687841 +0000 UTC m=+5826.592661390" watchObservedRunningTime="2026-01-27 18:34:11.505152655 +0000 UTC m=+5826.604126214" Jan 27 18:34:11 crc kubenswrapper[5049]: I0127 18:34:11.510878 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 27 18:34:14 crc kubenswrapper[5049]: I0127 18:34:14.134389 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 18:34:14 crc kubenswrapper[5049]: I0127 18:34:14.190004 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:34:14 crc kubenswrapper[5049]: I0127 18:34:14.506799 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ffc385a2-1649-4e2b-943a-144ac4722ce9" containerName="cinder-scheduler" containerID="cri-o://a1786eb7c9a209c7c07fb911bda28f31d8694a37270feccd84c1e40a7ce8eefc" gracePeriod=30 Jan 27 18:34:14 crc kubenswrapper[5049]: I0127 18:34:14.506946 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ffc385a2-1649-4e2b-943a-144ac4722ce9" containerName="probe" containerID="cri-o://15e6242b08f2a052b86a05d9b5b5c9588e661aeb6abecc6235c86009a4222116" gracePeriod=30 Jan 27 18:34:15 crc kubenswrapper[5049]: I0127 18:34:15.518180 5049 generic.go:334] "Generic (PLEG): container finished" podID="ffc385a2-1649-4e2b-943a-144ac4722ce9" containerID="15e6242b08f2a052b86a05d9b5b5c9588e661aeb6abecc6235c86009a4222116" exitCode=0 Jan 27 18:34:15 crc kubenswrapper[5049]: I0127 18:34:15.518261 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ffc385a2-1649-4e2b-943a-144ac4722ce9","Type":"ContainerDied","Data":"15e6242b08f2a052b86a05d9b5b5c9588e661aeb6abecc6235c86009a4222116"} Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.119219 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-volume-volume1-0" Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.537607 5049 generic.go:334] "Generic (PLEG): container finished" podID="ffc385a2-1649-4e2b-943a-144ac4722ce9" containerID="a1786eb7c9a209c7c07fb911bda28f31d8694a37270feccd84c1e40a7ce8eefc" exitCode=0 Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.537658 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ffc385a2-1649-4e2b-943a-144ac4722ce9","Type":"ContainerDied","Data":"a1786eb7c9a209c7c07fb911bda28f31d8694a37270feccd84c1e40a7ce8eefc"} Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.699392 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.761511 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.784665 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffc385a2-1649-4e2b-943a-144ac4722ce9-etc-machine-id\") pod \"ffc385a2-1649-4e2b-943a-144ac4722ce9\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.784911 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-config-data\") pod \"ffc385a2-1649-4e2b-943a-144ac4722ce9\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.785025 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-combined-ca-bundle\") pod \"ffc385a2-1649-4e2b-943a-144ac4722ce9\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.785105 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-scripts\") pod \"ffc385a2-1649-4e2b-943a-144ac4722ce9\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.785183 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml49m\" (UniqueName: \"kubernetes.io/projected/ffc385a2-1649-4e2b-943a-144ac4722ce9-kube-api-access-ml49m\") pod \"ffc385a2-1649-4e2b-943a-144ac4722ce9\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.785363 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-config-data-custom\") pod \"ffc385a2-1649-4e2b-943a-144ac4722ce9\" (UID: \"ffc385a2-1649-4e2b-943a-144ac4722ce9\") " Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.786555 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffc385a2-1649-4e2b-943a-144ac4722ce9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ffc385a2-1649-4e2b-943a-144ac4722ce9" (UID: "ffc385a2-1649-4e2b-943a-144ac4722ce9"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.808177 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc385a2-1649-4e2b-943a-144ac4722ce9-kube-api-access-ml49m" (OuterVolumeSpecName: "kube-api-access-ml49m") pod "ffc385a2-1649-4e2b-943a-144ac4722ce9" (UID: "ffc385a2-1649-4e2b-943a-144ac4722ce9"). InnerVolumeSpecName "kube-api-access-ml49m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.809071 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ffc385a2-1649-4e2b-943a-144ac4722ce9" (UID: "ffc385a2-1649-4e2b-943a-144ac4722ce9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.810725 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-scripts" (OuterVolumeSpecName: "scripts") pod "ffc385a2-1649-4e2b-943a-144ac4722ce9" (UID: "ffc385a2-1649-4e2b-943a-144ac4722ce9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.890853 5049 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffc385a2-1649-4e2b-943a-144ac4722ce9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.891082 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.891151 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml49m\" (UniqueName: \"kubernetes.io/projected/ffc385a2-1649-4e2b-943a-144ac4722ce9-kube-api-access-ml49m\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.891266 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.907378 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffc385a2-1649-4e2b-943a-144ac4722ce9" (UID: "ffc385a2-1649-4e2b-943a-144ac4722ce9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.919804 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-config-data" (OuterVolumeSpecName: "config-data") pod "ffc385a2-1649-4e2b-943a-144ac4722ce9" (UID: "ffc385a2-1649-4e2b-943a-144ac4722ce9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.993614 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:16 crc kubenswrapper[5049]: I0127 18:34:16.993663 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc385a2-1649-4e2b-943a-144ac4722ce9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.560987 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ffc385a2-1649-4e2b-943a-144ac4722ce9","Type":"ContainerDied","Data":"d6a7ed6d8148f0e831e563590088d62b91268d8a77cfe67fd7bba120542fb9dd"} Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.561088 5049 scope.go:117] "RemoveContainer" containerID="15e6242b08f2a052b86a05d9b5b5c9588e661aeb6abecc6235c86009a4222116" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.561323 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.592179 5049 scope.go:117] "RemoveContainer" containerID="a1786eb7c9a209c7c07fb911bda28f31d8694a37270feccd84c1e40a7ce8eefc" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.603254 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.613616 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.633313 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:34:17 crc kubenswrapper[5049]: E0127 18:34:17.634060 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc385a2-1649-4e2b-943a-144ac4722ce9" containerName="cinder-scheduler" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.634079 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc385a2-1649-4e2b-943a-144ac4722ce9" containerName="cinder-scheduler" Jan 27 18:34:17 crc kubenswrapper[5049]: E0127 18:34:17.634103 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc385a2-1649-4e2b-943a-144ac4722ce9" containerName="probe" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.634110 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc385a2-1649-4e2b-943a-144ac4722ce9" containerName="probe" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.634311 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc385a2-1649-4e2b-943a-144ac4722ce9" containerName="cinder-scheduler" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.634332 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc385a2-1649-4e2b-943a-144ac4722ce9" containerName="probe" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.635598 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.644719 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.649879 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.655979 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc385a2-1649-4e2b-943a-144ac4722ce9" path="/var/lib/kubelet/pods/ffc385a2-1649-4e2b-943a-144ac4722ce9/volumes" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.707117 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lgsc\" (UniqueName: \"kubernetes.io/projected/62aac37e-a0bb-428e-b094-71a7fd36f533-kube-api-access-5lgsc\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.707172 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62aac37e-a0bb-428e-b094-71a7fd36f533-scripts\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.707208 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62aac37e-a0bb-428e-b094-71a7fd36f533-config-data\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.707274 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62aac37e-a0bb-428e-b094-71a7fd36f533-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.707338 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/62aac37e-a0bb-428e-b094-71a7fd36f533-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.707357 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62aac37e-a0bb-428e-b094-71a7fd36f533-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.809376 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lgsc\" (UniqueName: \"kubernetes.io/projected/62aac37e-a0bb-428e-b094-71a7fd36f533-kube-api-access-5lgsc\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.809433 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/62aac37e-a0bb-428e-b094-71a7fd36f533-scripts\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.809468 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62aac37e-a0bb-428e-b094-71a7fd36f533-config-data\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.809505 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62aac37e-a0bb-428e-b094-71a7fd36f533-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.809587 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/62aac37e-a0bb-428e-b094-71a7fd36f533-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.809609 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62aac37e-a0bb-428e-b094-71a7fd36f533-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.810036 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/62aac37e-a0bb-428e-b094-71a7fd36f533-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.825028 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/62aac37e-a0bb-428e-b094-71a7fd36f533-scripts\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.825525 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62aac37e-a0bb-428e-b094-71a7fd36f533-config-data\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.825568 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62aac37e-a0bb-428e-b094-71a7fd36f533-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.828741 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/62aac37e-a0bb-428e-b094-71a7fd36f533-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.831207 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5lgsc\" (UniqueName: \"kubernetes.io/projected/62aac37e-a0bb-428e-b094-71a7fd36f533-kube-api-access-5lgsc\") pod \"cinder-scheduler-0\" (UID: \"62aac37e-a0bb-428e-b094-71a7fd36f533\") " pod="openstack/cinder-scheduler-0" Jan 27 18:34:17 crc kubenswrapper[5049]: I0127 18:34:17.974360 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 18:34:18 crc kubenswrapper[5049]: I0127 18:34:18.441146 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 18:34:18 crc kubenswrapper[5049]: I0127 18:34:18.572433 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"62aac37e-a0bb-428e-b094-71a7fd36f533","Type":"ContainerStarted","Data":"1858d55c486b52366864b33ea58298d9e9149701d917bddb585361ebcbe7f936"} Jan 27 18:34:19 crc kubenswrapper[5049]: I0127 18:34:19.587650 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"62aac37e-a0bb-428e-b094-71a7fd36f533","Type":"ContainerStarted","Data":"7977d494690907e22cbce41d94f9e0b5e24eda3c3416d7c2446f70449b9f8c0e"} Jan 27 18:34:20 crc kubenswrapper[5049]: I0127 18:34:20.597127 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"62aac37e-a0bb-428e-b094-71a7fd36f533","Type":"ContainerStarted","Data":"c0c1e7998a7036c5d06a675ca2d464e26e217ae8349134780bc93bfad855a5b7"} Jan 27 18:34:20 crc kubenswrapper[5049]: I0127 18:34:20.627059 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.627033237 podStartE2EDuration="3.627033237s" podCreationTimestamp="2026-01-27 18:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:34:20.617321073 +0000 UTC m=+5835.716294642" watchObservedRunningTime="2026-01-27 18:34:20.627033237 +0000 UTC m=+5835.726006796" Jan 27 18:34:21 crc kubenswrapper[5049]: I0127 18:34:21.119476 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 18:34:22 crc kubenswrapper[5049]: I0127 18:34:22.975493 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 18:34:28 crc kubenswrapper[5049]: I0127 18:34:28.168793 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 18:35:57 crc kubenswrapper[5049]: I0127 18:35:57.052818 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-hmnhj"] Jan 27 18:35:57 crc kubenswrapper[5049]: I0127 18:35:57.065460 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7eec-account-create-update-xbn8m"] Jan 27 18:35:57 crc kubenswrapper[5049]: I0127 18:35:57.073947 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-hmnhj"] Jan 27 18:35:57 crc kubenswrapper[5049]: I0127 18:35:57.081157 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7eec-account-create-update-xbn8m"] Jan 27 18:35:57 crc kubenswrapper[5049]: I0127 18:35:57.660945 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e4e3b8-37e1-45ab-ba2f-d9e426926055" path="/var/lib/kubelet/pods/51e4e3b8-37e1-45ab-ba2f-d9e426926055/volumes" Jan 27 18:35:57 crc 
kubenswrapper[5049]: I0127 18:35:57.661935 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56a71484-f8e0-4b87-91a8-d1e16dd46958" path="/var/lib/kubelet/pods/56a71484-f8e0-4b87-91a8-d1e16dd46958/volumes" Jan 27 18:36:03 crc kubenswrapper[5049]: I0127 18:36:03.032027 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-z8f98"] Jan 27 18:36:03 crc kubenswrapper[5049]: I0127 18:36:03.040592 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-z8f98"] Jan 27 18:36:03 crc kubenswrapper[5049]: I0127 18:36:03.656479 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f86ad93-5ff6-419f-a10a-b88ce9d4706d" path="/var/lib/kubelet/pods/9f86ad93-5ff6-419f-a10a-b88ce9d4706d/volumes" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.807355 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-grm4c"] Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.809426 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-grm4c" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.811374 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.813542 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-j7sjx" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.834962 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-grm4c"] Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.846330 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-fqwht"] Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.853849 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.879464 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-fqwht"] Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.946777 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/355c8da6-76d2-4246-b327-403f1c9aa64c-var-run\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.946876 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/355c8da6-76d2-4246-b327-403f1c9aa64c-etc-ovs\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.946917 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/355c8da6-76d2-4246-b327-403f1c9aa64c-var-log\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.947069 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cn82\" (UniqueName: \"kubernetes.io/projected/355c8da6-76d2-4246-b327-403f1c9aa64c-kube-api-access-2cn82\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.947111 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/30bb024b-8bc7-45cf-b794-5b9039e4b334-var-run-ovn\") pod \"ovn-controller-grm4c\" (UID: \"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.947160 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhcz4\" (UniqueName: \"kubernetes.io/projected/30bb024b-8bc7-45cf-b794-5b9039e4b334-kube-api-access-jhcz4\") pod \"ovn-controller-grm4c\" (UID: \"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.947285 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/355c8da6-76d2-4246-b327-403f1c9aa64c-scripts\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.947320 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/30bb024b-8bc7-45cf-b794-5b9039e4b334-var-log-ovn\") pod \"ovn-controller-grm4c\" (UID: \"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.947384 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/30bb024b-8bc7-45cf-b794-5b9039e4b334-var-run\") pod \"ovn-controller-grm4c\" (UID: \"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.947605 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30bb024b-8bc7-45cf-b794-5b9039e4b334-scripts\") pod \"ovn-controller-grm4c\" (UID: \"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:08 crc kubenswrapper[5049]: I0127 18:36:08.947730 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/355c8da6-76d2-4246-b327-403f1c9aa64c-var-lib\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.049479 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/355c8da6-76d2-4246-b327-403f1c9aa64c-var-run\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.049932 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/355c8da6-76d2-4246-b327-403f1c9aa64c-etc-ovs\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.049863 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/355c8da6-76d2-4246-b327-403f1c9aa64c-var-run\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.050008 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/355c8da6-76d2-4246-b327-403f1c9aa64c-var-log\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.050101 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/355c8da6-76d2-4246-b327-403f1c9aa64c-var-log\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.050133 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cn82\" (UniqueName: \"kubernetes.io/projected/355c8da6-76d2-4246-b327-403f1c9aa64c-kube-api-access-2cn82\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.050162 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/30bb024b-8bc7-45cf-b794-5b9039e4b334-var-run-ovn\") pod \"ovn-controller-grm4c\" (UID: \"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:09 crc 
kubenswrapper[5049]: I0127 18:36:09.050153 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/355c8da6-76d2-4246-b327-403f1c9aa64c-etc-ovs\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.050231 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/30bb024b-8bc7-45cf-b794-5b9039e4b334-var-run-ovn\") pod \"ovn-controller-grm4c\" (UID: \"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.050299 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhcz4\" (UniqueName: \"kubernetes.io/projected/30bb024b-8bc7-45cf-b794-5b9039e4b334-kube-api-access-jhcz4\") pod \"ovn-controller-grm4c\" (UID: \"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.050761 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/355c8da6-76d2-4246-b327-403f1c9aa64c-scripts\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.051002 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/30bb024b-8bc7-45cf-b794-5b9039e4b334-var-log-ovn\") pod \"ovn-controller-grm4c\" (UID: \"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.053279 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/355c8da6-76d2-4246-b327-403f1c9aa64c-scripts\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.053325 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/30bb024b-8bc7-45cf-b794-5b9039e4b334-var-log-ovn\") pod \"ovn-controller-grm4c\" (UID: \"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.053423 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30bb024b-8bc7-45cf-b794-5b9039e4b334-var-run\") pod \"ovn-controller-grm4c\" (UID: \"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.053520 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30bb024b-8bc7-45cf-b794-5b9039e4b334-var-run\") pod \"ovn-controller-grm4c\" (UID: \"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.053628 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30bb024b-8bc7-45cf-b794-5b9039e4b334-scripts\") pod \"ovn-controller-grm4c\" (UID: 
\"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.053685 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/355c8da6-76d2-4246-b327-403f1c9aa64c-var-lib\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.053822 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/355c8da6-76d2-4246-b327-403f1c9aa64c-var-lib\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.055516 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30bb024b-8bc7-45cf-b794-5b9039e4b334-scripts\") pod \"ovn-controller-grm4c\" (UID: \"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.069378 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhcz4\" (UniqueName: \"kubernetes.io/projected/30bb024b-8bc7-45cf-b794-5b9039e4b334-kube-api-access-jhcz4\") pod \"ovn-controller-grm4c\" (UID: \"30bb024b-8bc7-45cf-b794-5b9039e4b334\") " pod="openstack/ovn-controller-grm4c" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.072073 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cn82\" (UniqueName: \"kubernetes.io/projected/355c8da6-76d2-4246-b327-403f1c9aa64c-kube-api-access-2cn82\") pod \"ovn-controller-ovs-fqwht\" (UID: \"355c8da6-76d2-4246-b327-403f1c9aa64c\") " pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.134329 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-grm4c" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.177251 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:09 crc kubenswrapper[5049]: I0127 18:36:09.638153 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-grm4c"] Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.067681 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-fqwht"] Jan 27 18:36:10 crc kubenswrapper[5049]: W0127 18:36:10.074867 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod355c8da6_76d2_4246_b327_403f1c9aa64c.slice/crio-c88566178defd0a551ea1a27327d27996a4b58571203f1fded738fc109aa29d8 WatchSource:0}: Error finding container c88566178defd0a551ea1a27327d27996a4b58571203f1fded738fc109aa29d8: Status 404 returned error can't find the container with id c88566178defd0a551ea1a27327d27996a4b58571203f1fded738fc109aa29d8 Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.313762 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-2zf4b"] Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.316462 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-2zf4b" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.319765 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.346391 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2zf4b"] Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.486630 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e5eaf59-09f4-4908-802b-9e1e58f6aa11-config\") pod \"ovn-controller-metrics-2zf4b\" (UID: \"6e5eaf59-09f4-4908-802b-9e1e58f6aa11\") " pod="openstack/ovn-controller-metrics-2zf4b" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.486716 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h5nr\" (UniqueName: \"kubernetes.io/projected/6e5eaf59-09f4-4908-802b-9e1e58f6aa11-kube-api-access-2h5nr\") pod \"ovn-controller-metrics-2zf4b\" (UID: \"6e5eaf59-09f4-4908-802b-9e1e58f6aa11\") " pod="openstack/ovn-controller-metrics-2zf4b" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.486882 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6e5eaf59-09f4-4908-802b-9e1e58f6aa11-ovn-rundir\") pod \"ovn-controller-metrics-2zf4b\" (UID: \"6e5eaf59-09f4-4908-802b-9e1e58f6aa11\") " pod="openstack/ovn-controller-metrics-2zf4b" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.486913 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6e5eaf59-09f4-4908-802b-9e1e58f6aa11-ovs-rundir\") pod \"ovn-controller-metrics-2zf4b\" (UID: \"6e5eaf59-09f4-4908-802b-9e1e58f6aa11\") " pod="openstack/ovn-controller-metrics-2zf4b" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.588577 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e5eaf59-09f4-4908-802b-9e1e58f6aa11-config\") pod \"ovn-controller-metrics-2zf4b\" (UID: \"6e5eaf59-09f4-4908-802b-9e1e58f6aa11\") " pod="openstack/ovn-controller-metrics-2zf4b" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.588653 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h5nr\" (UniqueName: \"kubernetes.io/projected/6e5eaf59-09f4-4908-802b-9e1e58f6aa11-kube-api-access-2h5nr\") pod \"ovn-controller-metrics-2zf4b\" (UID: \"6e5eaf59-09f4-4908-802b-9e1e58f6aa11\") " pod="openstack/ovn-controller-metrics-2zf4b" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.588817 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6e5eaf59-09f4-4908-802b-9e1e58f6aa11-ovn-rundir\") pod \"ovn-controller-metrics-2zf4b\" (UID: \"6e5eaf59-09f4-4908-802b-9e1e58f6aa11\") " pod="openstack/ovn-controller-metrics-2zf4b" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.588857 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6e5eaf59-09f4-4908-802b-9e1e58f6aa11-ovs-rundir\") pod \"ovn-controller-metrics-2zf4b\" (UID: \"6e5eaf59-09f4-4908-802b-9e1e58f6aa11\") " 
pod="openstack/ovn-controller-metrics-2zf4b" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.589075 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6e5eaf59-09f4-4908-802b-9e1e58f6aa11-ovn-rundir\") pod \"ovn-controller-metrics-2zf4b\" (UID: \"6e5eaf59-09f4-4908-802b-9e1e58f6aa11\") " pod="openstack/ovn-controller-metrics-2zf4b" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.589090 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6e5eaf59-09f4-4908-802b-9e1e58f6aa11-ovs-rundir\") pod \"ovn-controller-metrics-2zf4b\" (UID: \"6e5eaf59-09f4-4908-802b-9e1e58f6aa11\") " pod="openstack/ovn-controller-metrics-2zf4b" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.589359 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e5eaf59-09f4-4908-802b-9e1e58f6aa11-config\") pod \"ovn-controller-metrics-2zf4b\" (UID: \"6e5eaf59-09f4-4908-802b-9e1e58f6aa11\") " pod="openstack/ovn-controller-metrics-2zf4b" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.613925 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h5nr\" (UniqueName: \"kubernetes.io/projected/6e5eaf59-09f4-4908-802b-9e1e58f6aa11-kube-api-access-2h5nr\") pod \"ovn-controller-metrics-2zf4b\" (UID: \"6e5eaf59-09f4-4908-802b-9e1e58f6aa11\") " pod="openstack/ovn-controller-metrics-2zf4b" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.627484 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fqwht" event={"ID":"355c8da6-76d2-4246-b327-403f1c9aa64c","Type":"ContainerStarted","Data":"35f46a59e92abbe37080c270be58e97450f268b449687baca8d1a6dce373874b"} Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.627536 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fqwht" event={"ID":"355c8da6-76d2-4246-b327-403f1c9aa64c","Type":"ContainerStarted","Data":"c88566178defd0a551ea1a27327d27996a4b58571203f1fded738fc109aa29d8"} Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.630357 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grm4c" event={"ID":"30bb024b-8bc7-45cf-b794-5b9039e4b334","Type":"ContainerStarted","Data":"50f271eb8586e2aedf973a361c8c4422d447bb6aa56f76e80104989f959a8412"} Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.630414 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grm4c" event={"ID":"30bb024b-8bc7-45cf-b794-5b9039e4b334","Type":"ContainerStarted","Data":"a4ea0b42ecef5b9ad5b435054778abeb16b27665f82d1b071e84bd1ba1c1862c"} Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.630574 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-grm4c" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.661133 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-2zf4b" Jan 27 18:36:10 crc kubenswrapper[5049]: I0127 18:36:10.667111 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-grm4c" podStartSLOduration=2.667088326 podStartE2EDuration="2.667088326s" podCreationTimestamp="2026-01-27 18:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:36:10.663841054 +0000 UTC m=+5945.762814603" watchObservedRunningTime="2026-01-27 18:36:10.667088326 +0000 UTC m=+5945.766061875" Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.004882 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-s84fx"] Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.007139 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-s84fx" Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.017140 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-s84fx"] Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.115918 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2zf4b"] Jan 27 18:36:11 crc kubenswrapper[5049]: W0127 18:36:11.119217 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e5eaf59_09f4_4908_802b_9e1e58f6aa11.slice/crio-320edc3a95f94419b254313bb936891466be55a4b680a562434cfdd96ae4090b WatchSource:0}: Error finding container 320edc3a95f94419b254313bb936891466be55a4b680a562434cfdd96ae4090b: Status 404 returned error can't find the container with id 320edc3a95f94419b254313bb936891466be55a4b680a562434cfdd96ae4090b Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.201795 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d8f816-9132-4994-8c07-0dfa0dbc1726-operator-scripts\") pod \"octavia-db-create-s84fx\" (UID: \"b2d8f816-9132-4994-8c07-0dfa0dbc1726\") " pod="openstack/octavia-db-create-s84fx" Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.201881 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vxzt\" (UniqueName: \"kubernetes.io/projected/b2d8f816-9132-4994-8c07-0dfa0dbc1726-kube-api-access-2vxzt\") pod \"octavia-db-create-s84fx\" (UID: \"b2d8f816-9132-4994-8c07-0dfa0dbc1726\") " pod="openstack/octavia-db-create-s84fx" Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.304290 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d8f816-9132-4994-8c07-0dfa0dbc1726-operator-scripts\") pod \"octavia-db-create-s84fx\" (UID: \"b2d8f816-9132-4994-8c07-0dfa0dbc1726\") " pod="openstack/octavia-db-create-s84fx" Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.304393 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vxzt\" (UniqueName: \"kubernetes.io/projected/b2d8f816-9132-4994-8c07-0dfa0dbc1726-kube-api-access-2vxzt\") pod \"octavia-db-create-s84fx\" (UID: \"b2d8f816-9132-4994-8c07-0dfa0dbc1726\") " pod="openstack/octavia-db-create-s84fx" Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.305821 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d8f816-9132-4994-8c07-0dfa0dbc1726-operator-scripts\") pod \"octavia-db-create-s84fx\" (UID: \"b2d8f816-9132-4994-8c07-0dfa0dbc1726\") " pod="openstack/octavia-db-create-s84fx" Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.324739 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vxzt\" (UniqueName: \"kubernetes.io/projected/b2d8f816-9132-4994-8c07-0dfa0dbc1726-kube-api-access-2vxzt\") pod \"octavia-db-create-s84fx\" (UID: \"b2d8f816-9132-4994-8c07-0dfa0dbc1726\") " pod="openstack/octavia-db-create-s84fx" Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.325408 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-s84fx" Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.641093 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2zf4b" event={"ID":"6e5eaf59-09f4-4908-802b-9e1e58f6aa11","Type":"ContainerStarted","Data":"0aaf71d9b6413ef64716d83897a80604a544edf436ccf45354b38693cc9d9101"} Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.641377 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2zf4b" event={"ID":"6e5eaf59-09f4-4908-802b-9e1e58f6aa11","Type":"ContainerStarted","Data":"320edc3a95f94419b254313bb936891466be55a4b680a562434cfdd96ae4090b"} Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.643277 5049 generic.go:334] "Generic (PLEG): container finished" podID="355c8da6-76d2-4246-b327-403f1c9aa64c" containerID="35f46a59e92abbe37080c270be58e97450f268b449687baca8d1a6dce373874b" exitCode=0 Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.643425 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fqwht" event={"ID":"355c8da6-76d2-4246-b327-403f1c9aa64c","Type":"ContainerDied","Data":"35f46a59e92abbe37080c270be58e97450f268b449687baca8d1a6dce373874b"} Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.702667 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-2zf4b" podStartSLOduration=1.702648744 podStartE2EDuration="1.702648744s" podCreationTimestamp="2026-01-27 18:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:36:11.699198436 +0000 UTC m=+5946.798171995" watchObservedRunningTime="2026-01-27 18:36:11.702648744 +0000 UTC m=+5946.801622303" Jan 27 18:36:11 crc kubenswrapper[5049]: I0127 18:36:11.906509 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-s84fx"] Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.522598 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-e6f8-account-create-update-q64ft"] Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.524683 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-e6f8-account-create-update-q64ft" Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.526718 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.534432 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-e6f8-account-create-update-q64ft"] Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.551935 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zc2m\" (UniqueName: \"kubernetes.io/projected/8a0442b3-58dd-4659-b3cb-cbc90ab9b80b-kube-api-access-7zc2m\") pod \"octavia-e6f8-account-create-update-q64ft\" (UID: \"8a0442b3-58dd-4659-b3cb-cbc90ab9b80b\") " pod="openstack/octavia-e6f8-account-create-update-q64ft" Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.552054 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a0442b3-58dd-4659-b3cb-cbc90ab9b80b-operator-scripts\") pod \"octavia-e6f8-account-create-update-q64ft\" (UID: \"8a0442b3-58dd-4659-b3cb-cbc90ab9b80b\") " pod="openstack/octavia-e6f8-account-create-update-q64ft" Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.652811 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a0442b3-58dd-4659-b3cb-cbc90ab9b80b-operator-scripts\") pod \"octavia-e6f8-account-create-update-q64ft\" (UID: \"8a0442b3-58dd-4659-b3cb-cbc90ab9b80b\") " pod="openstack/octavia-e6f8-account-create-update-q64ft" Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.653054 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zc2m\" (UniqueName: \"kubernetes.io/projected/8a0442b3-58dd-4659-b3cb-cbc90ab9b80b-kube-api-access-7zc2m\") pod \"octavia-e6f8-account-create-update-q64ft\" (UID: \"8a0442b3-58dd-4659-b3cb-cbc90ab9b80b\") " pod="openstack/octavia-e6f8-account-create-update-q64ft" Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.653719 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a0442b3-58dd-4659-b3cb-cbc90ab9b80b-operator-scripts\") pod \"octavia-e6f8-account-create-update-q64ft\" (UID: \"8a0442b3-58dd-4659-b3cb-cbc90ab9b80b\") " pod="openstack/octavia-e6f8-account-create-update-q64ft" Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.658919 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fqwht" event={"ID":"355c8da6-76d2-4246-b327-403f1c9aa64c","Type":"ContainerStarted","Data":"909bf1dff4a57b9faf5f289b055fb3d8197ee69c34526d4d9907caa69a352851"} Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.658972 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-fqwht" event={"ID":"355c8da6-76d2-4246-b327-403f1c9aa64c","Type":"ContainerStarted","Data":"360ba4baeafee46568acac26d6bfb9d4c3d105312d86eba808b319ca99b4c7fe"} Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.659429 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.659659 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:12 crc 
kubenswrapper[5049]: I0127 18:36:12.660850 5049 generic.go:334] "Generic (PLEG): container finished" podID="b2d8f816-9132-4994-8c07-0dfa0dbc1726" containerID="42479d220ad1c922424f712ae3dd2d4b906af1f358d576179b61b401f6600b18" exitCode=0 Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.661450 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-s84fx" event={"ID":"b2d8f816-9132-4994-8c07-0dfa0dbc1726","Type":"ContainerDied","Data":"42479d220ad1c922424f712ae3dd2d4b906af1f358d576179b61b401f6600b18"} Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.661482 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-s84fx" event={"ID":"b2d8f816-9132-4994-8c07-0dfa0dbc1726","Type":"ContainerStarted","Data":"62811110c9d27f3b072e288ace977ae44133de3cc862ff91ceec05b1ccf7e123"} Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.672419 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zc2m\" (UniqueName: \"kubernetes.io/projected/8a0442b3-58dd-4659-b3cb-cbc90ab9b80b-kube-api-access-7zc2m\") pod \"octavia-e6f8-account-create-update-q64ft\" (UID: \"8a0442b3-58dd-4659-b3cb-cbc90ab9b80b\") " pod="openstack/octavia-e6f8-account-create-update-q64ft" Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.679803 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-fqwht" podStartSLOduration=4.679785952 podStartE2EDuration="4.679785952s" podCreationTimestamp="2026-01-27 18:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:36:12.673537425 +0000 UTC m=+5947.772510984" watchObservedRunningTime="2026-01-27 18:36:12.679785952 +0000 UTC m=+5947.778759501" Jan 27 18:36:12 crc kubenswrapper[5049]: I0127 18:36:12.884839 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-e6f8-account-create-update-q64ft" Jan 27 18:36:13 crc kubenswrapper[5049]: I0127 18:36:13.386613 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-e6f8-account-create-update-q64ft"] Jan 27 18:36:13 crc kubenswrapper[5049]: I0127 18:36:13.672350 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e6f8-account-create-update-q64ft" event={"ID":"8a0442b3-58dd-4659-b3cb-cbc90ab9b80b","Type":"ContainerStarted","Data":"b390c4f3e448ca2bf751f06df616a787fc203572266a96b1b26783641e722e66"} Jan 27 18:36:13 crc kubenswrapper[5049]: I0127 18:36:13.674218 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e6f8-account-create-update-q64ft" event={"ID":"8a0442b3-58dd-4659-b3cb-cbc90ab9b80b","Type":"ContainerStarted","Data":"ce3728515f5f5094776b9bc292fae1e5f37893977758853dfae4844566f6d142"} Jan 27 18:36:13 crc kubenswrapper[5049]: I0127 18:36:13.701508 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-e6f8-account-create-update-q64ft" podStartSLOduration=1.7014844569999998 podStartE2EDuration="1.701484457s" podCreationTimestamp="2026-01-27 18:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:36:13.688940793 +0000 UTC m=+5948.787914362" watchObservedRunningTime="2026-01-27 18:36:13.701484457 +0000 UTC m=+5948.800458026" Jan 27 18:36:14 crc kubenswrapper[5049]: I0127 18:36:14.077443 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-s84fx" Jan 27 18:36:14 crc kubenswrapper[5049]: I0127 18:36:14.091193 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d8f816-9132-4994-8c07-0dfa0dbc1726-operator-scripts\") pod \"b2d8f816-9132-4994-8c07-0dfa0dbc1726\" (UID: \"b2d8f816-9132-4994-8c07-0dfa0dbc1726\") " Jan 27 18:36:14 crc kubenswrapper[5049]: I0127 18:36:14.091727 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vxzt\" (UniqueName: \"kubernetes.io/projected/b2d8f816-9132-4994-8c07-0dfa0dbc1726-kube-api-access-2vxzt\") pod \"b2d8f816-9132-4994-8c07-0dfa0dbc1726\" (UID: \"b2d8f816-9132-4994-8c07-0dfa0dbc1726\") " Jan 27 18:36:14 crc kubenswrapper[5049]: I0127 18:36:14.091662 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2d8f816-9132-4994-8c07-0dfa0dbc1726-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2d8f816-9132-4994-8c07-0dfa0dbc1726" (UID: "b2d8f816-9132-4994-8c07-0dfa0dbc1726"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:36:14 crc kubenswrapper[5049]: I0127 18:36:14.092693 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2d8f816-9132-4994-8c07-0dfa0dbc1726-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:36:14 crc kubenswrapper[5049]: I0127 18:36:14.101032 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d8f816-9132-4994-8c07-0dfa0dbc1726-kube-api-access-2vxzt" (OuterVolumeSpecName: "kube-api-access-2vxzt") pod "b2d8f816-9132-4994-8c07-0dfa0dbc1726" (UID: "b2d8f816-9132-4994-8c07-0dfa0dbc1726"). InnerVolumeSpecName "kube-api-access-2vxzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:36:14 crc kubenswrapper[5049]: I0127 18:36:14.194805 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vxzt\" (UniqueName: \"kubernetes.io/projected/b2d8f816-9132-4994-8c07-0dfa0dbc1726-kube-api-access-2vxzt\") on node \"crc\" DevicePath \"\"" Jan 27 18:36:14 crc kubenswrapper[5049]: I0127 18:36:14.692320 5049 generic.go:334] "Generic (PLEG): container finished" podID="8a0442b3-58dd-4659-b3cb-cbc90ab9b80b" containerID="b390c4f3e448ca2bf751f06df616a787fc203572266a96b1b26783641e722e66" exitCode=0 Jan 27 18:36:14 crc kubenswrapper[5049]: I0127 18:36:14.692488 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e6f8-account-create-update-q64ft" event={"ID":"8a0442b3-58dd-4659-b3cb-cbc90ab9b80b","Type":"ContainerDied","Data":"b390c4f3e448ca2bf751f06df616a787fc203572266a96b1b26783641e722e66"} Jan 27 18:36:14 crc kubenswrapper[5049]: I0127 18:36:14.700614 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-s84fx" event={"ID":"b2d8f816-9132-4994-8c07-0dfa0dbc1726","Type":"ContainerDied","Data":"62811110c9d27f3b072e288ace977ae44133de3cc862ff91ceec05b1ccf7e123"} Jan 27 18:36:14 crc kubenswrapper[5049]: I0127 18:36:14.700737 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62811110c9d27f3b072e288ace977ae44133de3cc862ff91ceec05b1ccf7e123" Jan 27 18:36:14 crc kubenswrapper[5049]: I0127 18:36:14.700750 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-s84fx" Jan 27 18:36:16 crc kubenswrapper[5049]: I0127 18:36:16.153998 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-e6f8-account-create-update-q64ft" Jan 27 18:36:16 crc kubenswrapper[5049]: I0127 18:36:16.238201 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a0442b3-58dd-4659-b3cb-cbc90ab9b80b-operator-scripts\") pod \"8a0442b3-58dd-4659-b3cb-cbc90ab9b80b\" (UID: \"8a0442b3-58dd-4659-b3cb-cbc90ab9b80b\") " Jan 27 18:36:16 crc kubenswrapper[5049]: I0127 18:36:16.238369 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zc2m\" (UniqueName: \"kubernetes.io/projected/8a0442b3-58dd-4659-b3cb-cbc90ab9b80b-kube-api-access-7zc2m\") pod \"8a0442b3-58dd-4659-b3cb-cbc90ab9b80b\" (UID: \"8a0442b3-58dd-4659-b3cb-cbc90ab9b80b\") " Jan 27 18:36:16 crc kubenswrapper[5049]: I0127 18:36:16.238599 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a0442b3-58dd-4659-b3cb-cbc90ab9b80b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a0442b3-58dd-4659-b3cb-cbc90ab9b80b" (UID: "8a0442b3-58dd-4659-b3cb-cbc90ab9b80b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:36:16 crc kubenswrapper[5049]: I0127 18:36:16.238974 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a0442b3-58dd-4659-b3cb-cbc90ab9b80b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:36:16 crc kubenswrapper[5049]: I0127 18:36:16.244869 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a0442b3-58dd-4659-b3cb-cbc90ab9b80b-kube-api-access-7zc2m" (OuterVolumeSpecName: "kube-api-access-7zc2m") pod "8a0442b3-58dd-4659-b3cb-cbc90ab9b80b" (UID: "8a0442b3-58dd-4659-b3cb-cbc90ab9b80b"). InnerVolumeSpecName "kube-api-access-7zc2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:36:16 crc kubenswrapper[5049]: I0127 18:36:16.339916 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zc2m\" (UniqueName: \"kubernetes.io/projected/8a0442b3-58dd-4659-b3cb-cbc90ab9b80b-kube-api-access-7zc2m\") on node \"crc\" DevicePath \"\"" Jan 27 18:36:16 crc kubenswrapper[5049]: I0127 18:36:16.726144 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e6f8-account-create-update-q64ft" event={"ID":"8a0442b3-58dd-4659-b3cb-cbc90ab9b80b","Type":"ContainerDied","Data":"ce3728515f5f5094776b9bc292fae1e5f37893977758853dfae4844566f6d142"} Jan 27 18:36:16 crc kubenswrapper[5049]: I0127 18:36:16.726241 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce3728515f5f5094776b9bc292fae1e5f37893977758853dfae4844566f6d142" Jan 27 18:36:16 crc kubenswrapper[5049]: I0127 18:36:16.726356 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-e6f8-account-create-update-q64ft" Jan 27 18:36:17 crc kubenswrapper[5049]: I0127 18:36:17.063034 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dq7vd"] Jan 27 18:36:17 crc kubenswrapper[5049]: I0127 18:36:17.080556 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dq7vd"] Jan 27 18:36:17 crc kubenswrapper[5049]: I0127 18:36:17.438430 5049 scope.go:117] "RemoveContainer" containerID="8a2311d36b4d5270800ca17ed852c09554837c96e98626c1f133e3ad58896d3f" Jan 27 18:36:17 crc kubenswrapper[5049]: I0127 18:36:17.469542 5049 scope.go:117] "RemoveContainer" containerID="962c0f07ae1e48ac1399fff055cecfe27764c8a0fbe0b7ba1b9487d161cd843e" Jan 27 18:36:17 crc kubenswrapper[5049]: I0127 18:36:17.533912 5049 scope.go:117] "RemoveContainer" containerID="671ad2ae68d44bdbaedad93b3c24adbbb9cf2d0f7b7a86a14d12c01aa15dd0ac" Jan 27 18:36:17 crc kubenswrapper[5049]: I0127 18:36:17.660893 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fca580b-5ec6-433f-872d-f2b2af2445f8" path="/var/lib/kubelet/pods/0fca580b-5ec6-433f-872d-f2b2af2445f8/volumes" Jan 27 18:36:17 crc kubenswrapper[5049]: I0127 18:36:17.781979 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:36:17 crc kubenswrapper[5049]: I0127 18:36:17.782139 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.074720 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-jrf2m"] Jan 27 18:36:19 crc kubenswrapper[5049]: E0127 18:36:19.075281 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0442b3-58dd-4659-b3cb-cbc90ab9b80b" containerName="mariadb-account-create-update" Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.075303 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0442b3-58dd-4659-b3cb-cbc90ab9b80b" containerName="mariadb-account-create-update" Jan 27 18:36:19 crc kubenswrapper[5049]: E0127 18:36:19.075346 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d8f816-9132-4994-8c07-0dfa0dbc1726" containerName="mariadb-database-create" Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.075357 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d8f816-9132-4994-8c07-0dfa0dbc1726" containerName="mariadb-database-create" Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.075662 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d8f816-9132-4994-8c07-0dfa0dbc1726" containerName="mariadb-database-create" Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.075805 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a0442b3-58dd-4659-b3cb-cbc90ab9b80b" containerName="mariadb-account-create-update" Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.076636 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-jrf2m" Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.093697 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-jrf2m"] Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.096360 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9kw7\" (UniqueName: \"kubernetes.io/projected/13c7681e-c718-4c6a-9646-aba9f96018b0-kube-api-access-v9kw7\") pod \"octavia-persistence-db-create-jrf2m\" (UID: \"13c7681e-c718-4c6a-9646-aba9f96018b0\") " pod="openstack/octavia-persistence-db-create-jrf2m" Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.096790 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c7681e-c718-4c6a-9646-aba9f96018b0-operator-scripts\") pod \"octavia-persistence-db-create-jrf2m\" (UID: \"13c7681e-c718-4c6a-9646-aba9f96018b0\") " pod="openstack/octavia-persistence-db-create-jrf2m" Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.198446 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9kw7\" (UniqueName: \"kubernetes.io/projected/13c7681e-c718-4c6a-9646-aba9f96018b0-kube-api-access-v9kw7\") pod \"octavia-persistence-db-create-jrf2m\" (UID: \"13c7681e-c718-4c6a-9646-aba9f96018b0\") " pod="openstack/octavia-persistence-db-create-jrf2m" Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.198551 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c7681e-c718-4c6a-9646-aba9f96018b0-operator-scripts\") pod \"octavia-persistence-db-create-jrf2m\" (UID: \"13c7681e-c718-4c6a-9646-aba9f96018b0\") " 
pod="openstack/octavia-persistence-db-create-jrf2m" Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.199710 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c7681e-c718-4c6a-9646-aba9f96018b0-operator-scripts\") pod \"octavia-persistence-db-create-jrf2m\" (UID: \"13c7681e-c718-4c6a-9646-aba9f96018b0\") " pod="openstack/octavia-persistence-db-create-jrf2m" Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.217969 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9kw7\" (UniqueName: \"kubernetes.io/projected/13c7681e-c718-4c6a-9646-aba9f96018b0-kube-api-access-v9kw7\") pod \"octavia-persistence-db-create-jrf2m\" (UID: \"13c7681e-c718-4c6a-9646-aba9f96018b0\") " pod="openstack/octavia-persistence-db-create-jrf2m" Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.401748 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-jrf2m" Jan 27 18:36:19 crc kubenswrapper[5049]: I0127 18:36:19.901260 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-jrf2m"] Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.094444 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-abd2-account-create-update-psv4r"] Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.098018 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-abd2-account-create-update-psv4r" Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.104882 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.107380 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-abd2-account-create-update-psv4r"] Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.121661 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvnhd\" (UniqueName: \"kubernetes.io/projected/7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11-kube-api-access-nvnhd\") pod \"octavia-abd2-account-create-update-psv4r\" (UID: \"7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11\") " pod="openstack/octavia-abd2-account-create-update-psv4r" Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.121935 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11-operator-scripts\") pod \"octavia-abd2-account-create-update-psv4r\" (UID: \"7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11\") " pod="openstack/octavia-abd2-account-create-update-psv4r" Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.224422 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11-operator-scripts\") pod \"octavia-abd2-account-create-update-psv4r\" (UID: \"7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11\") " pod="openstack/octavia-abd2-account-create-update-psv4r" Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.224984 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvnhd\" (UniqueName: \"kubernetes.io/projected/7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11-kube-api-access-nvnhd\") pod 
\"octavia-abd2-account-create-update-psv4r\" (UID: \"7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11\") " pod="openstack/octavia-abd2-account-create-update-psv4r" Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.225219 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11-operator-scripts\") pod \"octavia-abd2-account-create-update-psv4r\" (UID: \"7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11\") " pod="openstack/octavia-abd2-account-create-update-psv4r" Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.257392 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvnhd\" (UniqueName: \"kubernetes.io/projected/7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11-kube-api-access-nvnhd\") pod \"octavia-abd2-account-create-update-psv4r\" (UID: \"7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11\") " pod="openstack/octavia-abd2-account-create-update-psv4r" Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.416404 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-abd2-account-create-update-psv4r" Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.769770 5049 generic.go:334] "Generic (PLEG): container finished" podID="13c7681e-c718-4c6a-9646-aba9f96018b0" containerID="f682a7222239980702714a322a48e281797f6e25f04a38e531d90df3412d53da" exitCode=0 Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.769854 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-jrf2m" event={"ID":"13c7681e-c718-4c6a-9646-aba9f96018b0","Type":"ContainerDied","Data":"f682a7222239980702714a322a48e281797f6e25f04a38e531d90df3412d53da"} Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.770211 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-jrf2m" event={"ID":"13c7681e-c718-4c6a-9646-aba9f96018b0","Type":"ContainerStarted","Data":"72646cc460c5e72180ea59add7a0d8a4784080e5a7df71e882b8a6c74669ff69"} Jan 27 18:36:20 crc kubenswrapper[5049]: I0127 18:36:20.859229 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-abd2-account-create-update-psv4r"] Jan 27 18:36:20 crc kubenswrapper[5049]: W0127 18:36:20.859931 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a7eb388_c1a3_4277_b14c_1fbb8eb4bf11.slice/crio-147d2540c7358c9a395071266201d25e5501eecade639000f8cc41ad1d2fc6e9 WatchSource:0}: Error finding container 147d2540c7358c9a395071266201d25e5501eecade639000f8cc41ad1d2fc6e9: Status 404 returned error can't find the container with id 147d2540c7358c9a395071266201d25e5501eecade639000f8cc41ad1d2fc6e9 Jan 27 18:36:21 crc kubenswrapper[5049]: I0127 18:36:21.780119 5049 generic.go:334] "Generic (PLEG): container finished" podID="7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11" containerID="907be253e26f102f604d88a6065405b2781f4cad077fe45d739c061608fc9cf6" exitCode=0 Jan 27 18:36:21 crc kubenswrapper[5049]: I0127 18:36:21.780202 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-abd2-account-create-update-psv4r" event={"ID":"7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11","Type":"ContainerDied","Data":"907be253e26f102f604d88a6065405b2781f4cad077fe45d739c061608fc9cf6"} Jan 27 18:36:21 crc kubenswrapper[5049]: I0127 18:36:21.780353 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-abd2-account-create-update-psv4r" 
event={"ID":"7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11","Type":"ContainerStarted","Data":"147d2540c7358c9a395071266201d25e5501eecade639000f8cc41ad1d2fc6e9"} Jan 27 18:36:22 crc kubenswrapper[5049]: I0127 18:36:22.123717 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-jrf2m" Jan 27 18:36:22 crc kubenswrapper[5049]: I0127 18:36:22.266613 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9kw7\" (UniqueName: \"kubernetes.io/projected/13c7681e-c718-4c6a-9646-aba9f96018b0-kube-api-access-v9kw7\") pod \"13c7681e-c718-4c6a-9646-aba9f96018b0\" (UID: \"13c7681e-c718-4c6a-9646-aba9f96018b0\") " Jan 27 18:36:22 crc kubenswrapper[5049]: I0127 18:36:22.266655 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c7681e-c718-4c6a-9646-aba9f96018b0-operator-scripts\") pod \"13c7681e-c718-4c6a-9646-aba9f96018b0\" (UID: \"13c7681e-c718-4c6a-9646-aba9f96018b0\") " Jan 27 18:36:22 crc kubenswrapper[5049]: I0127 18:36:22.267362 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13c7681e-c718-4c6a-9646-aba9f96018b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13c7681e-c718-4c6a-9646-aba9f96018b0" (UID: "13c7681e-c718-4c6a-9646-aba9f96018b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:36:22 crc kubenswrapper[5049]: I0127 18:36:22.272325 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c7681e-c718-4c6a-9646-aba9f96018b0-kube-api-access-v9kw7" (OuterVolumeSpecName: "kube-api-access-v9kw7") pod "13c7681e-c718-4c6a-9646-aba9f96018b0" (UID: "13c7681e-c718-4c6a-9646-aba9f96018b0"). InnerVolumeSpecName "kube-api-access-v9kw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:36:22 crc kubenswrapper[5049]: I0127 18:36:22.368866 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9kw7\" (UniqueName: \"kubernetes.io/projected/13c7681e-c718-4c6a-9646-aba9f96018b0-kube-api-access-v9kw7\") on node \"crc\" DevicePath \"\"" Jan 27 18:36:22 crc kubenswrapper[5049]: I0127 18:36:22.368902 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c7681e-c718-4c6a-9646-aba9f96018b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:36:22 crc kubenswrapper[5049]: I0127 18:36:22.792118 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-jrf2m" Jan 27 18:36:22 crc kubenswrapper[5049]: I0127 18:36:22.792128 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-jrf2m" event={"ID":"13c7681e-c718-4c6a-9646-aba9f96018b0","Type":"ContainerDied","Data":"72646cc460c5e72180ea59add7a0d8a4784080e5a7df71e882b8a6c74669ff69"} Jan 27 18:36:22 crc kubenswrapper[5049]: I0127 18:36:22.794377 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72646cc460c5e72180ea59add7a0d8a4784080e5a7df71e882b8a6c74669ff69" Jan 27 18:36:23 crc kubenswrapper[5049]: I0127 18:36:23.166838 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-abd2-account-create-update-psv4r" Jan 27 18:36:23 crc kubenswrapper[5049]: I0127 18:36:23.294133 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11-operator-scripts\") pod \"7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11\" (UID: \"7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11\") " Jan 27 18:36:23 crc kubenswrapper[5049]: I0127 18:36:23.294282 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvnhd\" (UniqueName: \"kubernetes.io/projected/7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11-kube-api-access-nvnhd\") pod \"7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11\" (UID: \"7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11\") " Jan 27 18:36:23 crc kubenswrapper[5049]: I0127 18:36:23.294961 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11" (UID: "7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:36:23 crc kubenswrapper[5049]: I0127 18:36:23.299969 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11-kube-api-access-nvnhd" (OuterVolumeSpecName: "kube-api-access-nvnhd") pod "7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11" (UID: "7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11"). InnerVolumeSpecName "kube-api-access-nvnhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:36:23 crc kubenswrapper[5049]: I0127 18:36:23.397129 5049 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:36:23 crc kubenswrapper[5049]: I0127 18:36:23.397188 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvnhd\" (UniqueName: \"kubernetes.io/projected/7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11-kube-api-access-nvnhd\") on node \"crc\" DevicePath \"\"" Jan 27 18:36:23 crc kubenswrapper[5049]: I0127 18:36:23.819286 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-abd2-account-create-update-psv4r" event={"ID":"7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11","Type":"ContainerDied","Data":"147d2540c7358c9a395071266201d25e5501eecade639000f8cc41ad1d2fc6e9"} Jan 27 18:36:23 crc kubenswrapper[5049]: I0127 18:36:23.819345 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="147d2540c7358c9a395071266201d25e5501eecade639000f8cc41ad1d2fc6e9" Jan 27 18:36:23 crc kubenswrapper[5049]: I0127 18:36:23.819367 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-abd2-account-create-update-psv4r" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.191768 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-5f6787874b-szhcb"] Jan 27 18:36:26 crc kubenswrapper[5049]: E0127 18:36:26.192614 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c7681e-c718-4c6a-9646-aba9f96018b0" containerName="mariadb-database-create" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.192630 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c7681e-c718-4c6a-9646-aba9f96018b0" containerName="mariadb-database-create" Jan 27 18:36:26 crc kubenswrapper[5049]: E0127 18:36:26.192676 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11" containerName="mariadb-account-create-update" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.192684 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11" containerName="mariadb-account-create-update" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.192925 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="13c7681e-c718-4c6a-9646-aba9f96018b0" containerName="mariadb-database-create" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.192954 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11" containerName="mariadb-account-create-update" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.194641 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.197986 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.198360 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-zfjts" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.198599 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.201188 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-5f6787874b-szhcb"] Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.355552 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-combined-ca-bundle\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.355653 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-config-data-merged\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.355744 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-octavia-run\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " 
pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.355796 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-scripts\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.355861 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-config-data\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.469749 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-combined-ca-bundle\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.470135 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-config-data-merged\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.470174 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-octavia-run\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.470210 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-scripts\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.470281 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-config-data\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.471478 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-config-data-merged\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.471779 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-octavia-run\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc 
kubenswrapper[5049]: I0127 18:36:26.476030 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-combined-ca-bundle\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.476159 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-config-data\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.480049 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c-scripts\") pod \"octavia-api-5f6787874b-szhcb\" (UID: \"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c\") " pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:26 crc kubenswrapper[5049]: I0127 18:36:26.518005 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:27 crc kubenswrapper[5049]: I0127 18:36:27.039427 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-5f6787874b-szhcb"] Jan 27 18:36:27 crc kubenswrapper[5049]: I0127 18:36:27.869859 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5f6787874b-szhcb" event={"ID":"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c","Type":"ContainerStarted","Data":"af5a030a49a0ce0ac9a37400690f9f0a8fb8ebde4a27d98227c2bfa62d53658e"} Jan 27 18:36:35 crc kubenswrapper[5049]: I0127 18:36:35.952318 5049 generic.go:334] "Generic (PLEG): container finished" podID="ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c" containerID="9503ef3adcb39b74b0b34f607d90fbc8b524075ff5119f8dd5825a04244bd00e" exitCode=0 Jan 27 18:36:35 crc kubenswrapper[5049]: I0127 18:36:35.952374 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5f6787874b-szhcb" event={"ID":"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c","Type":"ContainerDied","Data":"9503ef3adcb39b74b0b34f607d90fbc8b524075ff5119f8dd5825a04244bd00e"} Jan 27 18:36:36 crc kubenswrapper[5049]: I0127 18:36:36.966504 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5f6787874b-szhcb" event={"ID":"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c","Type":"ContainerStarted","Data":"215db3bc2fb9b0cada925c74c0255029b2ecc4c086b2e7b3fd89689044afc9bb"} Jan 27 18:36:36 crc kubenswrapper[5049]: I0127 18:36:36.968175 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-5f6787874b-szhcb" event={"ID":"ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c","Type":"ContainerStarted","Data":"404d887804952025f84f9b8f394a6d7578e58df904cfd6183b644b74300b2227"} Jan 27 18:36:36 crc kubenswrapper[5049]: I0127 18:36:36.970039 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:36 crc kubenswrapper[5049]: I0127 18:36:36.970157 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:37 crc kubenswrapper[5049]: I0127 18:36:37.011282 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-5f6787874b-szhcb" podStartSLOduration=2.982847612 
podStartE2EDuration="11.011263851s" podCreationTimestamp="2026-01-27 18:36:26 +0000 UTC" firstStartedPulling="2026-01-27 18:36:27.046230015 +0000 UTC m=+5962.145203564" lastFinishedPulling="2026-01-27 18:36:35.074646254 +0000 UTC m=+5970.173619803" observedRunningTime="2026-01-27 18:36:37.006042514 +0000 UTC m=+5972.105016083" watchObservedRunningTime="2026-01-27 18:36:37.011263851 +0000 UTC m=+5972.110237400" Jan 27 18:36:39 crc kubenswrapper[5049]: I0127 18:36:39.193305 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-grm4c" podUID="30bb024b-8bc7-45cf-b794-5b9039e4b334" containerName="ovn-controller" probeResult="failure" output=< Jan 27 18:36:39 crc kubenswrapper[5049]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 18:36:39 crc kubenswrapper[5049]: > Jan 27 18:36:42 crc kubenswrapper[5049]: I0127 18:36:42.537080 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tzscp"] Jan 27 18:36:42 crc kubenswrapper[5049]: I0127 18:36:42.540112 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:36:42 crc kubenswrapper[5049]: I0127 18:36:42.566860 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tzscp"] Jan 27 18:36:42 crc kubenswrapper[5049]: I0127 18:36:42.606462 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-842tn\" (UniqueName: \"kubernetes.io/projected/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-kube-api-access-842tn\") pod \"redhat-operators-tzscp\" (UID: \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\") " pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:36:42 crc kubenswrapper[5049]: I0127 18:36:42.606869 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-catalog-content\") pod \"redhat-operators-tzscp\" (UID: \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\") " pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:36:42 crc kubenswrapper[5049]: I0127 18:36:42.606929 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-utilities\") pod \"redhat-operators-tzscp\" (UID: \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\") " pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:36:42 crc kubenswrapper[5049]: I0127 18:36:42.708509 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-842tn\" (UniqueName: \"kubernetes.io/projected/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-kube-api-access-842tn\") pod \"redhat-operators-tzscp\" (UID: \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\") " pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:36:42 crc kubenswrapper[5049]: I0127 18:36:42.708627 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-catalog-content\") pod \"redhat-operators-tzscp\" (UID: \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\") " pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:36:42 crc kubenswrapper[5049]: I0127 18:36:42.708692 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-utilities\") pod \"redhat-operators-tzscp\" (UID: \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\") " pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:36:42 crc kubenswrapper[5049]: I0127 18:36:42.709170 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-utilities\") pod \"redhat-operators-tzscp\" (UID: \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\") " pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:36:42 crc kubenswrapper[5049]: I0127 18:36:42.709643 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-catalog-content\") pod \"redhat-operators-tzscp\" (UID: \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\") " pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:36:42 crc kubenswrapper[5049]: I0127 18:36:42.730279 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-842tn\" (UniqueName: \"kubernetes.io/projected/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-kube-api-access-842tn\") pod \"redhat-operators-tzscp\" (UID: \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\") " pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:36:42 crc kubenswrapper[5049]: I0127 18:36:42.873269 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.376613 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-zbfsr"] Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.378414 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.382260 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.383595 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.384483 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.408503 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-zbfsr"] Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.461100 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tzscp"] Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.522791 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/931851bb-1d01-4828-9eb9-a45836710020-config-data\") pod \"octavia-rsyslog-zbfsr\" (UID: \"931851bb-1d01-4828-9eb9-a45836710020\") " pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.523423 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/931851bb-1d01-4828-9eb9-a45836710020-hm-ports\") pod \"octavia-rsyslog-zbfsr\" (UID: \"931851bb-1d01-4828-9eb9-a45836710020\") " pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.523537 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/931851bb-1d01-4828-9eb9-a45836710020-scripts\") pod \"octavia-rsyslog-zbfsr\" (UID: \"931851bb-1d01-4828-9eb9-a45836710020\") " pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.523738 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/931851bb-1d01-4828-9eb9-a45836710020-config-data-merged\") pod \"octavia-rsyslog-zbfsr\" (UID: \"931851bb-1d01-4828-9eb9-a45836710020\") " pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.625146 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/931851bb-1d01-4828-9eb9-a45836710020-config-data-merged\") pod \"octavia-rsyslog-zbfsr\" (UID: \"931851bb-1d01-4828-9eb9-a45836710020\") " pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.625229 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/931851bb-1d01-4828-9eb9-a45836710020-config-data\") pod \"octavia-rsyslog-zbfsr\" (UID: \"931851bb-1d01-4828-9eb9-a45836710020\") " pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.625301 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/931851bb-1d01-4828-9eb9-a45836710020-hm-ports\") pod \"octavia-rsyslog-zbfsr\" (UID: \"931851bb-1d01-4828-9eb9-a45836710020\") " pod="openstack/octavia-rsyslog-zbfsr" 
Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.625329 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/931851bb-1d01-4828-9eb9-a45836710020-scripts\") pod \"octavia-rsyslog-zbfsr\" (UID: \"931851bb-1d01-4828-9eb9-a45836710020\") " pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.626807 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/931851bb-1d01-4828-9eb9-a45836710020-hm-ports\") pod \"octavia-rsyslog-zbfsr\" (UID: \"931851bb-1d01-4828-9eb9-a45836710020\") " pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.627219 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/931851bb-1d01-4828-9eb9-a45836710020-config-data-merged\") pod \"octavia-rsyslog-zbfsr\" (UID: \"931851bb-1d01-4828-9eb9-a45836710020\") " pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.633325 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/931851bb-1d01-4828-9eb9-a45836710020-config-data\") pod \"octavia-rsyslog-zbfsr\" (UID: \"931851bb-1d01-4828-9eb9-a45836710020\") " pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.633744 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/931851bb-1d01-4828-9eb9-a45836710020-scripts\") pod \"octavia-rsyslog-zbfsr\" (UID: \"931851bb-1d01-4828-9eb9-a45836710020\") " pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:36:43 crc kubenswrapper[5049]: I0127 18:36:43.712792 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.076706 5049 generic.go:334] "Generic (PLEG): container finished" podID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerID="55f782068cb53a54f13163cd399cf9a6079509b4078d72cb772c24f93728009f" exitCode=0 Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.076779 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzscp" event={"ID":"faaf8fb9-4f08-4d0a-b9c5-d516222e671d","Type":"ContainerDied","Data":"55f782068cb53a54f13163cd399cf9a6079509b4078d72cb772c24f93728009f"} Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.077042 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzscp" event={"ID":"faaf8fb9-4f08-4d0a-b9c5-d516222e671d","Type":"ContainerStarted","Data":"bd45f80ad48e12dadd14ca153a93ee95dbe031c100ddfb3be3afc3558fcd963d"} Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.295742 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.311937 5049 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-grm4c" podUID="30bb024b-8bc7-45cf-b794-5b9039e4b334" containerName="ovn-controller" probeResult="failure" output=< Jan 27 18:36:44 crc kubenswrapper[5049]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 18:36:44 crc kubenswrapper[5049]: > Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.312054 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-fqwht" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.437580 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-zbfsr"] Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.470605 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-grm4c-config-crznv"] Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.471973 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.490968 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.530946 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-grm4c-config-crznv"] Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.540105 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-zbfsr"] Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.684817 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrp87\" (UniqueName: \"kubernetes.io/projected/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-kube-api-access-zrp87\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.686483 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-run-ovn\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.686641 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-additional-scripts\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.686797 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-scripts\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.686917 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-run\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.687051 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-log-ovn\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.792390 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-additional-scripts\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.793460 5049 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-scripts\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.793510 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-run\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.793597 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-log-ovn\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.793985 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrp87\" (UniqueName: \"kubernetes.io/projected/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-kube-api-access-zrp87\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.795558 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-run-ovn\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.796333 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-run-ovn\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.796713 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-run\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.796877 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-log-ovn\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.809268 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-scripts\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.812843 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-additional-scripts\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:44 crc kubenswrapper[5049]: I0127 18:36:44.816989 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrp87\" (UniqueName: \"kubernetes.io/projected/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-kube-api-access-zrp87\") pod \"ovn-controller-grm4c-config-crznv\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.050648 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-79lw4"] Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.052635 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-79lw4" Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.062945 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.063812 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-79lw4"] Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.094064 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-zbfsr" event={"ID":"931851bb-1d01-4828-9eb9-a45836710020","Type":"ContainerStarted","Data":"1a5fd9b61112748404f6fc2b22e01859a1262bb384c577d28d2b27e10f5b4a9e"} Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.098457 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzscp" event={"ID":"faaf8fb9-4f08-4d0a-b9c5-d516222e671d","Type":"ContainerStarted","Data":"f9b0901be974dcab6b301499a1923f495417102f307e5bbd63e0e188a958d900"} Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.100806 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ef1951d8-2ddb-4aa6-933b-ac010f6cf14b-amphora-image\") pod \"octavia-image-upload-59f8cff499-79lw4\" (UID: \"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b\") " pod="openstack/octavia-image-upload-59f8cff499-79lw4" Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.101861 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef1951d8-2ddb-4aa6-933b-ac010f6cf14b-httpd-config\") pod \"octavia-image-upload-59f8cff499-79lw4\" (UID: \"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b\") " pod="openstack/octavia-image-upload-59f8cff499-79lw4" Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.112194 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.204391 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ef1951d8-2ddb-4aa6-933b-ac010f6cf14b-amphora-image\") pod \"octavia-image-upload-59f8cff499-79lw4\" (UID: \"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b\") " pod="openstack/octavia-image-upload-59f8cff499-79lw4" Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.204482 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef1951d8-2ddb-4aa6-933b-ac010f6cf14b-httpd-config\") pod \"octavia-image-upload-59f8cff499-79lw4\" (UID: \"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b\") " pod="openstack/octavia-image-upload-59f8cff499-79lw4" Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.207031 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ef1951d8-2ddb-4aa6-933b-ac010f6cf14b-amphora-image\") pod \"octavia-image-upload-59f8cff499-79lw4\" (UID: \"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b\") " pod="openstack/octavia-image-upload-59f8cff499-79lw4" Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.212720 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef1951d8-2ddb-4aa6-933b-ac010f6cf14b-httpd-config\") pod \"octavia-image-upload-59f8cff499-79lw4\" (UID: \"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b\") " pod="openstack/octavia-image-upload-59f8cff499-79lw4" Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.387277 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-79lw4" Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.744993 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-grm4c-config-crznv"] Jan 27 18:36:45 crc kubenswrapper[5049]: I0127 18:36:45.977617 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-79lw4"] Jan 27 18:36:46 crc kubenswrapper[5049]: I0127 18:36:46.111877 5049 generic.go:334] "Generic (PLEG): container finished" podID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerID="f9b0901be974dcab6b301499a1923f495417102f307e5bbd63e0e188a958d900" exitCode=0 Jan 27 18:36:46 crc kubenswrapper[5049]: I0127 18:36:46.111919 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzscp" event={"ID":"faaf8fb9-4f08-4d0a-b9c5-d516222e671d","Type":"ContainerDied","Data":"f9b0901be974dcab6b301499a1923f495417102f307e5bbd63e0e188a958d900"} Jan 27 18:36:46 crc kubenswrapper[5049]: W0127 18:36:46.311362 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef1951d8_2ddb_4aa6_933b_ac010f6cf14b.slice/crio-62e3b1f9fd037647f9497ac2f4012e740b9f721baeeb999be3b2ac76d61e07f3 WatchSource:0}: Error finding container 62e3b1f9fd037647f9497ac2f4012e740b9f721baeeb999be3b2ac76d61e07f3: Status 404 returned error can't find the container with id 62e3b1f9fd037647f9497ac2f4012e740b9f721baeeb999be3b2ac76d61e07f3 Jan 27 18:36:46 crc kubenswrapper[5049]: W0127 18:36:46.315871 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc268b1fd_7dfb_4d81_b7aa_50fae27a3ef4.slice/crio-16b03476de3561b04722934f1fad5023dd56d83ebd20a24981ebae576a66ad7b WatchSource:0}: Error finding container 16b03476de3561b04722934f1fad5023dd56d83ebd20a24981ebae576a66ad7b: Status 404 returned error can't find the container with id 16b03476de3561b04722934f1fad5023dd56d83ebd20a24981ebae576a66ad7b Jan 27 18:36:46 crc kubenswrapper[5049]: I0127 18:36:46.864119 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:47 crc kubenswrapper[5049]: I0127 18:36:47.120761 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-79lw4" event={"ID":"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b","Type":"ContainerStarted","Data":"62e3b1f9fd037647f9497ac2f4012e740b9f721baeeb999be3b2ac76d61e07f3"} Jan 27 18:36:47 crc kubenswrapper[5049]: I0127 18:36:47.122142 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grm4c-config-crznv" event={"ID":"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4","Type":"ContainerStarted","Data":"c36fb37279884d0b0cbe682dfc8c129a92681f1aacb0ce60120424ebada46e97"} Jan 27 18:36:47 crc kubenswrapper[5049]: I0127 18:36:47.122162 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grm4c-config-crznv" event={"ID":"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4","Type":"ContainerStarted","Data":"16b03476de3561b04722934f1fad5023dd56d83ebd20a24981ebae576a66ad7b"} Jan 27 18:36:47 crc kubenswrapper[5049]: I0127 18:36:47.146445 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-grm4c-config-crznv" podStartSLOduration=3.146426191 podStartE2EDuration="3.146426191s" podCreationTimestamp="2026-01-27 18:36:44 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:36:47.13683998 +0000 UTC m=+5982.235813529" watchObservedRunningTime="2026-01-27 18:36:47.146426191 +0000 UTC m=+5982.245399740" Jan 27 18:36:47 crc kubenswrapper[5049]: I0127 18:36:47.781632 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:36:47 crc kubenswrapper[5049]: I0127 18:36:47.782008 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:36:47 crc kubenswrapper[5049]: I0127 18:36:47.865561 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-5f6787874b-szhcb" Jan 27 18:36:48 crc kubenswrapper[5049]: I0127 18:36:48.146835 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzscp" event={"ID":"faaf8fb9-4f08-4d0a-b9c5-d516222e671d","Type":"ContainerStarted","Data":"e220e0769f58124551f152d3cb68803a93177b66a1b6cf6a9db5171977f06516"} Jan 27 18:36:48 crc kubenswrapper[5049]: I0127 18:36:48.182013 5049 generic.go:334] "Generic (PLEG): container finished" podID="c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4" containerID="c36fb37279884d0b0cbe682dfc8c129a92681f1aacb0ce60120424ebada46e97" exitCode=0 Jan 27 18:36:48 crc kubenswrapper[5049]: I0127 18:36:48.182113 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grm4c-config-crznv" event={"ID":"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4","Type":"ContainerDied","Data":"c36fb37279884d0b0cbe682dfc8c129a92681f1aacb0ce60120424ebada46e97"} Jan 27 18:36:48 crc kubenswrapper[5049]: I0127 18:36:48.206878 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-zbfsr" event={"ID":"931851bb-1d01-4828-9eb9-a45836710020","Type":"ContainerStarted","Data":"62ae34790dd561e5693f9275327c462625405f32085e5c0c56babaef8080532e"} Jan 27 18:36:48 crc kubenswrapper[5049]: I0127 18:36:48.207742 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tzscp" podStartSLOduration=3.375810021 podStartE2EDuration="6.207721305s" podCreationTimestamp="2026-01-27 18:36:42 +0000 UTC" firstStartedPulling="2026-01-27 18:36:44.079474171 +0000 UTC m=+5979.178447720" lastFinishedPulling="2026-01-27 18:36:46.911385415 +0000 UTC m=+5982.010359004" observedRunningTime="2026-01-27 18:36:48.194142502 +0000 UTC m=+5983.293116071" watchObservedRunningTime="2026-01-27 18:36:48.207721305 +0000 UTC m=+5983.306694844" Jan 27 18:36:49 crc kubenswrapper[5049]: I0127 18:36:49.241145 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-grm4c" Jan 27 18:36:49 crc kubenswrapper[5049]: I0127 18:36:49.977048 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.041923 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-run\") pod \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.042032 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-run" (OuterVolumeSpecName: "var-run") pod "c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4" (UID: "c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.042047 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrp87\" (UniqueName: \"kubernetes.io/projected/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-kube-api-access-zrp87\") pod \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.042176 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-log-ovn\") pod \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.042235 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-scripts\") pod \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.042284 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-additional-scripts\") pod \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.042333 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-run-ovn\") pod \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\" (UID: \"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4\") " Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.042369 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4" (UID: "c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.042801 5049 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.042821 5049 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.042855 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4" (UID: "c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.043420 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4" (UID: "c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.043781 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-scripts" (OuterVolumeSpecName: "scripts") pod "c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4" (UID: "c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.047638 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-kube-api-access-zrp87" (OuterVolumeSpecName: "kube-api-access-zrp87") pod "c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4" (UID: "c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4"). InnerVolumeSpecName "kube-api-access-zrp87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.144441 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrp87\" (UniqueName: \"kubernetes.io/projected/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-kube-api-access-zrp87\") on node \"crc\" DevicePath \"\"" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.144492 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.144506 5049 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.144520 5049 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.231146 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-grm4c-config-crznv"] Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.241947 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-grm4c-config-crznv" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.241944 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grm4c-config-crznv" event={"ID":"c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4","Type":"ContainerDied","Data":"16b03476de3561b04722934f1fad5023dd56d83ebd20a24981ebae576a66ad7b"} Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.242090 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16b03476de3561b04722934f1fad5023dd56d83ebd20a24981ebae576a66ad7b" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.244368 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-grm4c-config-crznv"] Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.255598 5049 generic.go:334] "Generic (PLEG): container finished" podID="931851bb-1d01-4828-9eb9-a45836710020" containerID="62ae34790dd561e5693f9275327c462625405f32085e5c0c56babaef8080532e" exitCode=0 Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.255649 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-zbfsr" event={"ID":"931851bb-1d01-4828-9eb9-a45836710020","Type":"ContainerDied","Data":"62ae34790dd561e5693f9275327c462625405f32085e5c0c56babaef8080532e"} Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.382431 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-grm4c-config-v6rp2"] Jan 27 18:36:50 crc kubenswrapper[5049]: E0127 18:36:50.382981 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4" containerName="ovn-config" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.383043 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4" containerName="ovn-config" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.383277 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4" containerName="ovn-config" Jan 27 18:36:50 crc kubenswrapper[5049]: 
I0127 18:36:50.384101 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.387420 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.394476 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-grm4c-config-v6rp2"] Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.448783 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-run-ovn\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.449532 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d9115ff-9d8e-4618-bed9-2e69c2732780-scripts\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.449787 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-log-ovn\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.449846 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-run\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.449951 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtgb8\" (UniqueName: \"kubernetes.io/projected/7d9115ff-9d8e-4618-bed9-2e69c2732780-kube-api-access-rtgb8\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.449987 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7d9115ff-9d8e-4618-bed9-2e69c2732780-additional-scripts\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.551755 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d9115ff-9d8e-4618-bed9-2e69c2732780-scripts\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.551888 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-log-ovn\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.551934 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-run\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.551987 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtgb8\" (UniqueName: \"kubernetes.io/projected/7d9115ff-9d8e-4618-bed9-2e69c2732780-kube-api-access-rtgb8\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.552024 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7d9115ff-9d8e-4618-bed9-2e69c2732780-additional-scripts\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.552112 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-run-ovn\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.552465 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-run-ovn\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.552529 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-run\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.552543 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-log-ovn\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.553405 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7d9115ff-9d8e-4618-bed9-2e69c2732780-additional-scripts\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.555012 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7d9115ff-9d8e-4618-bed9-2e69c2732780-scripts\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.579561 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtgb8\" (UniqueName: \"kubernetes.io/projected/7d9115ff-9d8e-4618-bed9-2e69c2732780-kube-api-access-rtgb8\") pod \"ovn-controller-grm4c-config-v6rp2\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:50 crc kubenswrapper[5049]: I0127 18:36:50.710617 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:36:51 crc kubenswrapper[5049]: I0127 18:36:51.298565 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-grm4c-config-v6rp2"] Jan 27 18:36:51 crc kubenswrapper[5049]: I0127 18:36:51.657960 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4" path="/var/lib/kubelet/pods/c268b1fd-7dfb-4d81-b7aa-50fae27a3ef4/volumes" Jan 27 18:36:52 crc kubenswrapper[5049]: I0127 18:36:52.299008 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grm4c-config-v6rp2" event={"ID":"7d9115ff-9d8e-4618-bed9-2e69c2732780","Type":"ContainerStarted","Data":"da3083e155ad5bfa8f8f6b445b482cdd67c8b0ede50bb5837bd398ac00de15c1"} Jan 27 18:36:52 crc kubenswrapper[5049]: I0127 18:36:52.299319 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grm4c-config-v6rp2" event={"ID":"7d9115ff-9d8e-4618-bed9-2e69c2732780","Type":"ContainerStarted","Data":"979ec2c39b58ac291562dab37204bf4268adeae0a51a442617a9b2dfa7e9d135"} Jan 27 18:36:52 crc kubenswrapper[5049]: I0127 18:36:52.323710 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-grm4c-config-v6rp2" podStartSLOduration=2.323695812 podStartE2EDuration="2.323695812s" podCreationTimestamp="2026-01-27 18:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:36:52.321399487 +0000 UTC m=+5987.420373056" watchObservedRunningTime="2026-01-27 18:36:52.323695812 +0000 UTC m=+5987.422669361" Jan 27 18:36:52 crc kubenswrapper[5049]: I0127 18:36:52.873580 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:36:52 crc kubenswrapper[5049]: I0127 18:36:52.873635 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.184522 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-qg6hr"] Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.186425 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.195855 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-qg6hr"] Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.197526 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.311266 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-config-data\") pod \"octavia-db-sync-qg6hr\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.311627 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-scripts\") pod \"octavia-db-sync-qg6hr\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.311796 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-combined-ca-bundle\") pod \"octavia-db-sync-qg6hr\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.311839 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2aedefdb-f36a-4658-941f-3eac0242a8c9-config-data-merged\") pod \"octavia-db-sync-qg6hr\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.413640 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-combined-ca-bundle\") pod \"octavia-db-sync-qg6hr\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.413729 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2aedefdb-f36a-4658-941f-3eac0242a8c9-config-data-merged\") pod \"octavia-db-sync-qg6hr\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.414409 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2aedefdb-f36a-4658-941f-3eac0242a8c9-config-data-merged\") pod \"octavia-db-sync-qg6hr\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.414531 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-config-data\") pod \"octavia-db-sync-qg6hr\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.414989 5049 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-scripts\") pod \"octavia-db-sync-qg6hr\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.433622 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-combined-ca-bundle\") pod \"octavia-db-sync-qg6hr\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.433641 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-scripts\") pod \"octavia-db-sync-qg6hr\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.433699 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-config-data\") pod \"octavia-db-sync-qg6hr\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.510313 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:36:53 crc kubenswrapper[5049]: I0127 18:36:53.920559 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tzscp" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerName="registry-server" probeResult="failure" output=< Jan 27 18:36:53 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 18:36:53 crc kubenswrapper[5049]: > Jan 27 18:36:54 crc kubenswrapper[5049]: I0127 18:36:54.318098 5049 generic.go:334] "Generic (PLEG): container finished" podID="7d9115ff-9d8e-4618-bed9-2e69c2732780" containerID="da3083e155ad5bfa8f8f6b445b482cdd67c8b0ede50bb5837bd398ac00de15c1" exitCode=0 Jan 27 18:36:54 crc kubenswrapper[5049]: I0127 18:36:54.318234 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grm4c-config-v6rp2" event={"ID":"7d9115ff-9d8e-4618-bed9-2e69c2732780","Type":"ContainerDied","Data":"da3083e155ad5bfa8f8f6b445b482cdd67c8b0ede50bb5837bd398ac00de15c1"} Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.401314 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-grm4c-config-v6rp2" event={"ID":"7d9115ff-9d8e-4618-bed9-2e69c2732780","Type":"ContainerDied","Data":"979ec2c39b58ac291562dab37204bf4268adeae0a51a442617a9b2dfa7e9d135"} Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.402234 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="979ec2c39b58ac291562dab37204bf4268adeae0a51a442617a9b2dfa7e9d135" Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.430815 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.620504 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtgb8\" (UniqueName: \"kubernetes.io/projected/7d9115ff-9d8e-4618-bed9-2e69c2732780-kube-api-access-rtgb8\") pod \"7d9115ff-9d8e-4618-bed9-2e69c2732780\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.620710 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7d9115ff-9d8e-4618-bed9-2e69c2732780-additional-scripts\") pod \"7d9115ff-9d8e-4618-bed9-2e69c2732780\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.621218 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-log-ovn\") pod \"7d9115ff-9d8e-4618-bed9-2e69c2732780\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.621802 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-run-ovn\") pod \"7d9115ff-9d8e-4618-bed9-2e69c2732780\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.621341 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7d9115ff-9d8e-4618-bed9-2e69c2732780" (UID: "7d9115ff-9d8e-4618-bed9-2e69c2732780"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.621868 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d9115ff-9d8e-4618-bed9-2e69c2732780-scripts\") pod \"7d9115ff-9d8e-4618-bed9-2e69c2732780\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.621461 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d9115ff-9d8e-4618-bed9-2e69c2732780-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7d9115ff-9d8e-4618-bed9-2e69c2732780" (UID: "7d9115ff-9d8e-4618-bed9-2e69c2732780"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.621912 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7d9115ff-9d8e-4618-bed9-2e69c2732780" (UID: "7d9115ff-9d8e-4618-bed9-2e69c2732780"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.621993 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-run\") pod \"7d9115ff-9d8e-4618-bed9-2e69c2732780\" (UID: \"7d9115ff-9d8e-4618-bed9-2e69c2732780\") " Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.622087 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-run" (OuterVolumeSpecName: "var-run") pod "7d9115ff-9d8e-4618-bed9-2e69c2732780" (UID: "7d9115ff-9d8e-4618-bed9-2e69c2732780"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.622651 5049 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.622702 5049 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.622713 5049 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7d9115ff-9d8e-4618-bed9-2e69c2732780-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.622724 5049 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7d9115ff-9d8e-4618-bed9-2e69c2732780-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.622942 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d9115ff-9d8e-4618-bed9-2e69c2732780-scripts" (OuterVolumeSpecName: "scripts") pod "7d9115ff-9d8e-4618-bed9-2e69c2732780" (UID: "7d9115ff-9d8e-4618-bed9-2e69c2732780"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.626754 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d9115ff-9d8e-4618-bed9-2e69c2732780-kube-api-access-rtgb8" (OuterVolumeSpecName: "kube-api-access-rtgb8") pod "7d9115ff-9d8e-4618-bed9-2e69c2732780" (UID: "7d9115ff-9d8e-4618-bed9-2e69c2732780"). InnerVolumeSpecName "kube-api-access-rtgb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.724168 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d9115ff-9d8e-4618-bed9-2e69c2732780-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:01 crc kubenswrapper[5049]: I0127 18:37:01.724202 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtgb8\" (UniqueName: \"kubernetes.io/projected/7d9115ff-9d8e-4618-bed9-2e69c2732780-kube-api-access-rtgb8\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:02 crc kubenswrapper[5049]: I0127 18:37:02.197691 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-qg6hr"] Jan 27 18:37:02 crc kubenswrapper[5049]: W0127 18:37:02.204564 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2aedefdb_f36a_4658_941f_3eac0242a8c9.slice/crio-3ca40e13c66d98895d697100639c621b65264291cea89622684b950bb1f4718d WatchSource:0}: Error finding container 3ca40e13c66d98895d697100639c621b65264291cea89622684b950bb1f4718d: Status 404 returned error can't find the container with id 3ca40e13c66d98895d697100639c621b65264291cea89622684b950bb1f4718d Jan 27 18:37:02 crc kubenswrapper[5049]: I0127 18:37:02.419738 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-grm4c-config-v6rp2" Jan 27 18:37:02 crc kubenswrapper[5049]: I0127 18:37:02.420524 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-qg6hr" event={"ID":"2aedefdb-f36a-4658-941f-3eac0242a8c9","Type":"ContainerStarted","Data":"3ca40e13c66d98895d697100639c621b65264291cea89622684b950bb1f4718d"} Jan 27 18:37:02 crc kubenswrapper[5049]: I0127 18:37:02.544180 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-grm4c-config-v6rp2"] Jan 27 18:37:02 crc kubenswrapper[5049]: E0127 18:37:02.560527 5049 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/gthiemonge/octavia-amphora-image:latest" Jan 27 18:37:02 crc kubenswrapper[5049]: E0127 18:37:02.560698 5049 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/gthiemonge/octavia-amphora-image,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEST_DIR,Value:/usr/local/apache2/htdocs,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:amphora-image,ReadOnly:false,MountPath:/usr/local/apache2/htdocs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-image-upload-59f8cff499-79lw4_openstack(ef1951d8-2ddb-4aa6-933b-ac010f6cf14b): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 18:37:02 crc kubenswrapper[5049]: E0127 18:37:02.563539 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/octavia-image-upload-59f8cff499-79lw4" podUID="ef1951d8-2ddb-4aa6-933b-ac010f6cf14b" Jan 27 18:37:02 crc kubenswrapper[5049]: I0127 18:37:02.564435 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-grm4c-config-v6rp2"] Jan 27 18:37:03 crc kubenswrapper[5049]: I0127 18:37:03.433063 5049 generic.go:334] "Generic (PLEG): container finished" podID="2aedefdb-f36a-4658-941f-3eac0242a8c9" containerID="3c7d16de5aed4260472af85fcc9d65344873dab7a88a997ce70e94a026bc57f7" exitCode=0 Jan 27 18:37:03 crc kubenswrapper[5049]: I0127 18:37:03.433117 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-qg6hr" event={"ID":"2aedefdb-f36a-4658-941f-3eac0242a8c9","Type":"ContainerDied","Data":"3c7d16de5aed4260472af85fcc9d65344873dab7a88a997ce70e94a026bc57f7"} Jan 27 18:37:03 crc kubenswrapper[5049]: I0127 18:37:03.437101 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-zbfsr" event={"ID":"931851bb-1d01-4828-9eb9-a45836710020","Type":"ContainerStarted","Data":"93ac5eca6d204d592c6019df632524c2b4c711056370d2224c015a919505ca11"} Jan 27 18:37:03 crc kubenswrapper[5049]: I0127 18:37:03.437414 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:37:03 crc kubenswrapper[5049]: E0127 18:37:03.438250 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/gthiemonge/octavia-amphora-image\\\"\"" pod="openstack/octavia-image-upload-59f8cff499-79lw4" podUID="ef1951d8-2ddb-4aa6-933b-ac010f6cf14b" Jan 27 18:37:03 crc kubenswrapper[5049]: I0127 18:37:03.499691 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-zbfsr" podStartSLOduration=2.744958232 podStartE2EDuration="20.499655336s" podCreationTimestamp="2026-01-27 18:36:43 +0000 UTC" firstStartedPulling="2026-01-27 18:36:44.443481508 +0000 UTC m=+5979.542455057" lastFinishedPulling="2026-01-27 18:37:02.198178612 +0000 UTC m=+5997.297152161" observedRunningTime="2026-01-27 18:37:03.498460532 +0000 UTC m=+5998.597434091" watchObservedRunningTime="2026-01-27 18:37:03.499655336 +0000 UTC m=+5998.598628875" Jan 27 18:37:03 crc kubenswrapper[5049]: I0127 18:37:03.658777 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d9115ff-9d8e-4618-bed9-2e69c2732780" path="/var/lib/kubelet/pods/7d9115ff-9d8e-4618-bed9-2e69c2732780/volumes" Jan 27 18:37:03 crc kubenswrapper[5049]: I0127 18:37:03.936349 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tzscp" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerName="registry-server" probeResult="failure" output=< Jan 27 18:37:03 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 18:37:03 crc kubenswrapper[5049]: > Jan 27 18:37:04 crc kubenswrapper[5049]: I0127 18:37:04.445999 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-qg6hr" 
event={"ID":"2aedefdb-f36a-4658-941f-3eac0242a8c9","Type":"ContainerStarted","Data":"25a88ec595bf2a75caacd49036e51612a39616ba6c9ea00bf6554e2a01c4c5ae"} Jan 27 18:37:04 crc kubenswrapper[5049]: I0127 18:37:04.470039 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-qg6hr" podStartSLOduration=11.470022413 podStartE2EDuration="11.470022413s" podCreationTimestamp="2026-01-27 18:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:37:04.459834885 +0000 UTC m=+5999.558808454" watchObservedRunningTime="2026-01-27 18:37:04.470022413 +0000 UTC m=+5999.568995962" Jan 27 18:37:09 crc kubenswrapper[5049]: I0127 18:37:09.503252 5049 generic.go:334] "Generic (PLEG): container finished" podID="2aedefdb-f36a-4658-941f-3eac0242a8c9" containerID="25a88ec595bf2a75caacd49036e51612a39616ba6c9ea00bf6554e2a01c4c5ae" exitCode=0 Jan 27 18:37:09 crc kubenswrapper[5049]: I0127 18:37:09.503374 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-qg6hr" event={"ID":"2aedefdb-f36a-4658-941f-3eac0242a8c9","Type":"ContainerDied","Data":"25a88ec595bf2a75caacd49036e51612a39616ba6c9ea00bf6554e2a01c4c5ae"} Jan 27 18:37:10 crc kubenswrapper[5049]: I0127 18:37:10.890993 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:37:10 crc kubenswrapper[5049]: I0127 18:37:10.929738 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2aedefdb-f36a-4658-941f-3eac0242a8c9-config-data-merged\") pod \"2aedefdb-f36a-4658-941f-3eac0242a8c9\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " Jan 27 18:37:10 crc kubenswrapper[5049]: I0127 18:37:10.930281 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-config-data\") pod \"2aedefdb-f36a-4658-941f-3eac0242a8c9\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " Jan 27 18:37:10 crc kubenswrapper[5049]: I0127 18:37:10.930333 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-combined-ca-bundle\") pod \"2aedefdb-f36a-4658-941f-3eac0242a8c9\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " Jan 27 18:37:10 crc kubenswrapper[5049]: I0127 18:37:10.930409 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-scripts\") pod \"2aedefdb-f36a-4658-941f-3eac0242a8c9\" (UID: \"2aedefdb-f36a-4658-941f-3eac0242a8c9\") " Jan 27 18:37:10 crc kubenswrapper[5049]: I0127 18:37:10.936780 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-scripts" (OuterVolumeSpecName: "scripts") pod "2aedefdb-f36a-4658-941f-3eac0242a8c9" (UID: "2aedefdb-f36a-4658-941f-3eac0242a8c9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:37:10 crc kubenswrapper[5049]: I0127 18:37:10.939861 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-config-data" (OuterVolumeSpecName: "config-data") pod "2aedefdb-f36a-4658-941f-3eac0242a8c9" (UID: "2aedefdb-f36a-4658-941f-3eac0242a8c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:37:10 crc kubenswrapper[5049]: I0127 18:37:10.959223 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aedefdb-f36a-4658-941f-3eac0242a8c9-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "2aedefdb-f36a-4658-941f-3eac0242a8c9" (UID: "2aedefdb-f36a-4658-941f-3eac0242a8c9"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:37:10 crc kubenswrapper[5049]: I0127 18:37:10.966068 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2aedefdb-f36a-4658-941f-3eac0242a8c9" (UID: "2aedefdb-f36a-4658-941f-3eac0242a8c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:37:11 crc kubenswrapper[5049]: I0127 18:37:11.032072 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:11 crc kubenswrapper[5049]: I0127 18:37:11.032331 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:11 crc kubenswrapper[5049]: I0127 18:37:11.032477 5049 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aedefdb-f36a-4658-941f-3eac0242a8c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:11 crc kubenswrapper[5049]: I0127 18:37:11.032547 5049 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/2aedefdb-f36a-4658-941f-3eac0242a8c9-config-data-merged\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:11 crc kubenswrapper[5049]: I0127 18:37:11.525964 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-qg6hr" event={"ID":"2aedefdb-f36a-4658-941f-3eac0242a8c9","Type":"ContainerDied","Data":"3ca40e13c66d98895d697100639c621b65264291cea89622684b950bb1f4718d"} Jan 27 18:37:11 crc kubenswrapper[5049]: I0127 18:37:11.526006 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ca40e13c66d98895d697100639c621b65264291cea89622684b950bb1f4718d" Jan 27 18:37:11 crc kubenswrapper[5049]: I0127 18:37:11.526075 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-qg6hr" Jan 27 18:37:13 crc kubenswrapper[5049]: I0127 18:37:13.741306 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-zbfsr" Jan 27 18:37:13 crc kubenswrapper[5049]: I0127 18:37:13.922264 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tzscp" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerName="registry-server" probeResult="failure" output=< Jan 27 18:37:13 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 18:37:13 crc kubenswrapper[5049]: > Jan 27 18:37:17 crc kubenswrapper[5049]: I0127 18:37:17.576788 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-79lw4" event={"ID":"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b","Type":"ContainerStarted","Data":"29855d159475ad7d2ad672fc1fd4b69ce057d5b8eec183b11b502dc4ede92d93"} Jan 27 18:37:17 crc kubenswrapper[5049]: I0127 18:37:17.611319 5049 scope.go:117] "RemoveContainer" containerID="c785cd7c07377c39d96c952eb89eb570b13cdfad47e68265288a3e4873c6c2c8" Jan 27 18:37:17 crc kubenswrapper[5049]: I0127 18:37:17.781370 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:37:17 crc kubenswrapper[5049]: I0127 18:37:17.781749 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:37:17 crc kubenswrapper[5049]: I0127 18:37:17.781804 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 18:37:17 crc kubenswrapper[5049]: I0127 18:37:17.782579 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:37:17 crc kubenswrapper[5049]: I0127 18:37:17.782631 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" gracePeriod=600 Jan 27 18:37:17 crc kubenswrapper[5049]: E0127 18:37:17.901758 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:37:18 crc kubenswrapper[5049]: I0127 18:37:18.591937 5049 generic.go:334] "Generic (PLEG): container finished" 
podID="ef1951d8-2ddb-4aa6-933b-ac010f6cf14b" containerID="29855d159475ad7d2ad672fc1fd4b69ce057d5b8eec183b11b502dc4ede92d93" exitCode=0 Jan 27 18:37:18 crc kubenswrapper[5049]: I0127 18:37:18.592036 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-79lw4" event={"ID":"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b","Type":"ContainerDied","Data":"29855d159475ad7d2ad672fc1fd4b69ce057d5b8eec183b11b502dc4ede92d93"} Jan 27 18:37:18 crc kubenswrapper[5049]: I0127 18:37:18.595520 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" exitCode=0 Jan 27 18:37:18 crc kubenswrapper[5049]: I0127 18:37:18.595601 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"} Jan 27 18:37:18 crc kubenswrapper[5049]: I0127 18:37:18.595852 5049 scope.go:117] "RemoveContainer" containerID="0f0c85dbb74448a363ed0be73b30a973046d8019ad413c4249fdaeac7a5b4439" Jan 27 18:37:18 crc kubenswrapper[5049]: I0127 18:37:18.596849 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" Jan 27 18:37:18 crc kubenswrapper[5049]: E0127 18:37:18.597131 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:37:20 crc kubenswrapper[5049]: I0127 18:37:20.623971 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-79lw4" event={"ID":"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b","Type":"ContainerStarted","Data":"87c85d0e67f436de5bfb18067a5be2931e18239226d729dca3d58157fbc4846e"} Jan 27 18:37:20 crc kubenswrapper[5049]: I0127 18:37:20.657665 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-79lw4" podStartSLOduration=2.047119245 podStartE2EDuration="35.657644162s" podCreationTimestamp="2026-01-27 18:36:45 +0000 UTC" firstStartedPulling="2026-01-27 18:36:46.313971378 +0000 UTC m=+5981.412944927" lastFinishedPulling="2026-01-27 18:37:19.924496295 +0000 UTC m=+6015.023469844" observedRunningTime="2026-01-27 18:37:20.639524552 +0000 UTC m=+6015.738498101" watchObservedRunningTime="2026-01-27 18:37:20.657644162 +0000 UTC m=+6015.756617711" Jan 27 18:37:23 crc kubenswrapper[5049]: I0127 18:37:23.924795 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tzscp" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerName="registry-server" probeResult="failure" output=< Jan 27 18:37:23 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 18:37:23 crc kubenswrapper[5049]: > Jan 27 18:37:24 crc kubenswrapper[5049]: I0127 18:37:24.922526 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dxb8s"] Jan 27 18:37:24 crc kubenswrapper[5049]: E0127 18:37:24.923306 5049 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2aedefdb-f36a-4658-941f-3eac0242a8c9" containerName="octavia-db-sync" Jan 27 18:37:24 crc kubenswrapper[5049]: I0127 18:37:24.923322 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aedefdb-f36a-4658-941f-3eac0242a8c9" containerName="octavia-db-sync" Jan 27 18:37:24 crc kubenswrapper[5049]: E0127 18:37:24.923360 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aedefdb-f36a-4658-941f-3eac0242a8c9" containerName="init" Jan 27 18:37:24 crc kubenswrapper[5049]: I0127 18:37:24.923368 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aedefdb-f36a-4658-941f-3eac0242a8c9" containerName="init" Jan 27 18:37:24 crc kubenswrapper[5049]: E0127 18:37:24.923379 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9115ff-9d8e-4618-bed9-2e69c2732780" containerName="ovn-config" Jan 27 18:37:24 crc kubenswrapper[5049]: I0127 18:37:24.923387 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9115ff-9d8e-4618-bed9-2e69c2732780" containerName="ovn-config" Jan 27 18:37:24 crc kubenswrapper[5049]: I0127 18:37:24.923613 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aedefdb-f36a-4658-941f-3eac0242a8c9" containerName="octavia-db-sync" Jan 27 18:37:24 crc kubenswrapper[5049]: I0127 18:37:24.923635 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d9115ff-9d8e-4618-bed9-2e69c2732780" containerName="ovn-config" Jan 27 18:37:24 crc kubenswrapper[5049]: I0127 18:37:24.925084 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:24 crc kubenswrapper[5049]: I0127 18:37:24.946521 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxb8s"] Jan 27 18:37:25 crc kubenswrapper[5049]: I0127 18:37:25.034231 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920d2e54-21f8-49b4-b677-9bdd8e442323-catalog-content\") pod \"redhat-marketplace-dxb8s\" (UID: \"920d2e54-21f8-49b4-b677-9bdd8e442323\") " pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:25 crc kubenswrapper[5049]: I0127 18:37:25.034757 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920d2e54-21f8-49b4-b677-9bdd8e442323-utilities\") pod \"redhat-marketplace-dxb8s\" (UID: \"920d2e54-21f8-49b4-b677-9bdd8e442323\") " pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:25 crc kubenswrapper[5049]: I0127 18:37:25.035012 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbzh7\" (UniqueName: \"kubernetes.io/projected/920d2e54-21f8-49b4-b677-9bdd8e442323-kube-api-access-xbzh7\") pod \"redhat-marketplace-dxb8s\" (UID: \"920d2e54-21f8-49b4-b677-9bdd8e442323\") " pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:25 crc kubenswrapper[5049]: I0127 18:37:25.137076 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920d2e54-21f8-49b4-b677-9bdd8e442323-catalog-content\") pod \"redhat-marketplace-dxb8s\" (UID: \"920d2e54-21f8-49b4-b677-9bdd8e442323\") " pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:25 crc kubenswrapper[5049]: I0127 18:37:25.137595 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920d2e54-21f8-49b4-b677-9bdd8e442323-catalog-content\") pod \"redhat-marketplace-dxb8s\" (UID: \"920d2e54-21f8-49b4-b677-9bdd8e442323\") " pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:25 crc kubenswrapper[5049]: I0127 18:37:25.137730 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920d2e54-21f8-49b4-b677-9bdd8e442323-utilities\") pod \"redhat-marketplace-dxb8s\" (UID: \"920d2e54-21f8-49b4-b677-9bdd8e442323\") " pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:25 crc kubenswrapper[5049]: I0127 18:37:25.137829 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbzh7\" (UniqueName: \"kubernetes.io/projected/920d2e54-21f8-49b4-b677-9bdd8e442323-kube-api-access-xbzh7\") pod \"redhat-marketplace-dxb8s\" (UID: \"920d2e54-21f8-49b4-b677-9bdd8e442323\") " pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:25 crc kubenswrapper[5049]: I0127 18:37:25.138169 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920d2e54-21f8-49b4-b677-9bdd8e442323-utilities\") pod \"redhat-marketplace-dxb8s\" (UID: \"920d2e54-21f8-49b4-b677-9bdd8e442323\") " pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:25 crc kubenswrapper[5049]: I0127 18:37:25.164424 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbzh7\" (UniqueName: \"kubernetes.io/projected/920d2e54-21f8-49b4-b677-9bdd8e442323-kube-api-access-xbzh7\") pod \"redhat-marketplace-dxb8s\" (UID: \"920d2e54-21f8-49b4-b677-9bdd8e442323\") " pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:25 crc kubenswrapper[5049]: I0127 18:37:25.248444 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:26 crc kubenswrapper[5049]: I0127 18:37:26.087591 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxb8s"] Jan 27 18:37:26 crc kubenswrapper[5049]: I0127 18:37:26.678783 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxb8s" event={"ID":"920d2e54-21f8-49b4-b677-9bdd8e442323","Type":"ContainerStarted","Data":"1859bfbc97bb885f1971fb9f05c271af6e8858ac27d9009ac10cf1590a703d63"} Jan 27 18:37:26 crc kubenswrapper[5049]: I0127 18:37:26.679006 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxb8s" event={"ID":"920d2e54-21f8-49b4-b677-9bdd8e442323","Type":"ContainerStarted","Data":"76f38473fede700751eb33349aeacd706605a15fe2a3dd82a3f45c0891171983"} Jan 27 18:37:27 crc kubenswrapper[5049]: I0127 18:37:27.687171 5049 generic.go:334] "Generic (PLEG): container finished" podID="920d2e54-21f8-49b4-b677-9bdd8e442323" containerID="1859bfbc97bb885f1971fb9f05c271af6e8858ac27d9009ac10cf1590a703d63" exitCode=0 Jan 27 18:37:27 crc kubenswrapper[5049]: I0127 18:37:27.687280 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxb8s" event={"ID":"920d2e54-21f8-49b4-b677-9bdd8e442323","Type":"ContainerDied","Data":"1859bfbc97bb885f1971fb9f05c271af6e8858ac27d9009ac10cf1590a703d63"} Jan 27 18:37:29 crc kubenswrapper[5049]: I0127 18:37:29.646915 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" Jan 27 18:37:29 crc kubenswrapper[5049]: E0127 18:37:29.647589 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:37:30 crc kubenswrapper[5049]: I0127 18:37:30.710987 5049 generic.go:334] "Generic (PLEG): container finished" podID="920d2e54-21f8-49b4-b677-9bdd8e442323" containerID="3d322512a2ef21b62a647d9ac5557aae50121b2219928c0b63fa05d74fe75b83" exitCode=0 Jan 27 18:37:30 crc kubenswrapper[5049]: I0127 18:37:30.711543 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxb8s" event={"ID":"920d2e54-21f8-49b4-b677-9bdd8e442323","Type":"ContainerDied","Data":"3d322512a2ef21b62a647d9ac5557aae50121b2219928c0b63fa05d74fe75b83"} Jan 27 18:37:32 crc kubenswrapper[5049]: I0127 18:37:32.729306 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxb8s" event={"ID":"920d2e54-21f8-49b4-b677-9bdd8e442323","Type":"ContainerStarted","Data":"02d0dd64564a4352890a003e509e3618242ea6df2008f5a6150f4b232e9ff9d9"} Jan 27 18:37:32 crc kubenswrapper[5049]: I0127 18:37:32.753720 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dxb8s" podStartSLOduration=5.123470212 podStartE2EDuration="8.753691686s" podCreationTimestamp="2026-01-27 18:37:24 +0000 UTC" firstStartedPulling="2026-01-27 18:37:27.690279895 +0000 UTC m=+6022.789253444" lastFinishedPulling="2026-01-27 18:37:31.320501369 +0000 UTC m=+6026.419474918" observedRunningTime="2026-01-27 
18:37:32.747465768 +0000 UTC m=+6027.846439337" watchObservedRunningTime="2026-01-27 18:37:32.753691686 +0000 UTC m=+6027.852665905" Jan 27 18:37:33 crc kubenswrapper[5049]: I0127 18:37:33.933032 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tzscp" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerName="registry-server" probeResult="failure" output=< Jan 27 18:37:33 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 18:37:33 crc kubenswrapper[5049]: > Jan 27 18:37:35 crc kubenswrapper[5049]: I0127 18:37:35.249336 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:35 crc kubenswrapper[5049]: I0127 18:37:35.249710 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:35 crc kubenswrapper[5049]: I0127 18:37:35.365525 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:41 crc kubenswrapper[5049]: I0127 18:37:41.796486 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-79lw4"] Jan 27 18:37:41 crc kubenswrapper[5049]: I0127 18:37:41.797183 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-59f8cff499-79lw4" podUID="ef1951d8-2ddb-4aa6-933b-ac010f6cf14b" containerName="octavia-amphora-httpd" containerID="cri-o://87c85d0e67f436de5bfb18067a5be2931e18239226d729dca3d58157fbc4846e" gracePeriod=30 Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.367985 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-79lw4" Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.485639 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef1951d8-2ddb-4aa6-933b-ac010f6cf14b-httpd-config\") pod \"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b\" (UID: \"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b\") " Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.485867 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ef1951d8-2ddb-4aa6-933b-ac010f6cf14b-amphora-image\") pod \"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b\" (UID: \"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b\") " Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.511833 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef1951d8-2ddb-4aa6-933b-ac010f6cf14b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ef1951d8-2ddb-4aa6-933b-ac010f6cf14b" (UID: "ef1951d8-2ddb-4aa6-933b-ac010f6cf14b"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.542580 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef1951d8-2ddb-4aa6-933b-ac010f6cf14b-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "ef1951d8-2ddb-4aa6-933b-ac010f6cf14b" (UID: "ef1951d8-2ddb-4aa6-933b-ac010f6cf14b"). InnerVolumeSpecName "amphora-image". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.588107 5049 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/ef1951d8-2ddb-4aa6-933b-ac010f6cf14b-amphora-image\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.588149 5049 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef1951d8-2ddb-4aa6-933b-ac010f6cf14b-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.827736 5049 generic.go:334] "Generic (PLEG): container finished" podID="ef1951d8-2ddb-4aa6-933b-ac010f6cf14b" containerID="87c85d0e67f436de5bfb18067a5be2931e18239226d729dca3d58157fbc4846e" exitCode=0 Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.827784 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-79lw4" event={"ID":"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b","Type":"ContainerDied","Data":"87c85d0e67f436de5bfb18067a5be2931e18239226d729dca3d58157fbc4846e"} Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.827843 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-79lw4" event={"ID":"ef1951d8-2ddb-4aa6-933b-ac010f6cf14b","Type":"ContainerDied","Data":"62e3b1f9fd037647f9497ac2f4012e740b9f721baeeb999be3b2ac76d61e07f3"} Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.827862 5049 scope.go:117] "RemoveContainer" containerID="87c85d0e67f436de5bfb18067a5be2931e18239226d729dca3d58157fbc4846e" Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.829115 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-79lw4" Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.860968 5049 scope.go:117] "RemoveContainer" containerID="29855d159475ad7d2ad672fc1fd4b69ce057d5b8eec183b11b502dc4ede92d93" Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.863959 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-79lw4"] Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.877173 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-79lw4"] Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.912828 5049 scope.go:117] "RemoveContainer" containerID="87c85d0e67f436de5bfb18067a5be2931e18239226d729dca3d58157fbc4846e" Jan 27 18:37:42 crc kubenswrapper[5049]: E0127 18:37:42.913346 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87c85d0e67f436de5bfb18067a5be2931e18239226d729dca3d58157fbc4846e\": container with ID starting with 87c85d0e67f436de5bfb18067a5be2931e18239226d729dca3d58157fbc4846e not found: ID does not exist" containerID="87c85d0e67f436de5bfb18067a5be2931e18239226d729dca3d58157fbc4846e" Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.913377 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c85d0e67f436de5bfb18067a5be2931e18239226d729dca3d58157fbc4846e"} err="failed to get container status \"87c85d0e67f436de5bfb18067a5be2931e18239226d729dca3d58157fbc4846e\": rpc error: code = NotFound desc = could not find container \"87c85d0e67f436de5bfb18067a5be2931e18239226d729dca3d58157fbc4846e\": container with ID starting with 87c85d0e67f436de5bfb18067a5be2931e18239226d729dca3d58157fbc4846e not found: ID does not exist" Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.913396 5049 scope.go:117] "RemoveContainer" containerID="29855d159475ad7d2ad672fc1fd4b69ce057d5b8eec183b11b502dc4ede92d93" Jan 27 18:37:42 crc kubenswrapper[5049]: E0127 18:37:42.914339 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29855d159475ad7d2ad672fc1fd4b69ce057d5b8eec183b11b502dc4ede92d93\": container with ID starting with 29855d159475ad7d2ad672fc1fd4b69ce057d5b8eec183b11b502dc4ede92d93 not found: ID does not exist" containerID="29855d159475ad7d2ad672fc1fd4b69ce057d5b8eec183b11b502dc4ede92d93" Jan 27 18:37:42 crc kubenswrapper[5049]: I0127 18:37:42.914363 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29855d159475ad7d2ad672fc1fd4b69ce057d5b8eec183b11b502dc4ede92d93"} err="failed to get container status \"29855d159475ad7d2ad672fc1fd4b69ce057d5b8eec183b11b502dc4ede92d93\": rpc error: code = NotFound desc = could not find container \"29855d159475ad7d2ad672fc1fd4b69ce057d5b8eec183b11b502dc4ede92d93\": container with ID starting with 29855d159475ad7d2ad672fc1fd4b69ce057d5b8eec183b11b502dc4ede92d93 not found: ID does not exist" Jan 27 18:37:43 crc kubenswrapper[5049]: I0127 18:37:43.663111 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef1951d8-2ddb-4aa6-933b-ac010f6cf14b" path="/var/lib/kubelet/pods/ef1951d8-2ddb-4aa6-933b-ac010f6cf14b/volumes" Jan 27 18:37:43 crc kubenswrapper[5049]: I0127 18:37:43.924149 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tzscp" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" 
containerName="registry-server" probeResult="failure" output=< Jan 27 18:37:43 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 18:37:43 crc kubenswrapper[5049]: > Jan 27 18:37:44 crc kubenswrapper[5049]: I0127 18:37:44.646224 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" Jan 27 18:37:44 crc kubenswrapper[5049]: E0127 18:37:44.647023 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.351564 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.427467 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxb8s"] Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.756928 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-ms28h"] Jan 27 18:37:45 crc kubenswrapper[5049]: E0127 18:37:45.757404 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef1951d8-2ddb-4aa6-933b-ac010f6cf14b" containerName="octavia-amphora-httpd" Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.757425 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef1951d8-2ddb-4aa6-933b-ac010f6cf14b" containerName="octavia-amphora-httpd" Jan 27 18:37:45 crc kubenswrapper[5049]: E0127 18:37:45.757446 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef1951d8-2ddb-4aa6-933b-ac010f6cf14b" containerName="init" Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.757456 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef1951d8-2ddb-4aa6-933b-ac010f6cf14b" containerName="init" Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.757693 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef1951d8-2ddb-4aa6-933b-ac010f6cf14b" containerName="octavia-amphora-httpd" Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.759094 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-ms28h" Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.766184 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-ms28h"] Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.768186 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.847307 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3c9f2e6f-2581-4298-afc7-64c5424ebd56-httpd-config\") pod \"octavia-image-upload-59f8cff499-ms28h\" (UID: \"3c9f2e6f-2581-4298-afc7-64c5424ebd56\") " pod="openstack/octavia-image-upload-59f8cff499-ms28h" Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.847445 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/3c9f2e6f-2581-4298-afc7-64c5424ebd56-amphora-image\") pod \"octavia-image-upload-59f8cff499-ms28h\" (UID: \"3c9f2e6f-2581-4298-afc7-64c5424ebd56\") " pod="openstack/octavia-image-upload-59f8cff499-ms28h" Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.853638 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dxb8s" podUID="920d2e54-21f8-49b4-b677-9bdd8e442323" containerName="registry-server" containerID="cri-o://02d0dd64564a4352890a003e509e3618242ea6df2008f5a6150f4b232e9ff9d9" gracePeriod=2 Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.949247 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3c9f2e6f-2581-4298-afc7-64c5424ebd56-httpd-config\") pod \"octavia-image-upload-59f8cff499-ms28h\" (UID: \"3c9f2e6f-2581-4298-afc7-64c5424ebd56\") " pod="openstack/octavia-image-upload-59f8cff499-ms28h" Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.949365 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/3c9f2e6f-2581-4298-afc7-64c5424ebd56-amphora-image\") pod \"octavia-image-upload-59f8cff499-ms28h\" (UID: \"3c9f2e6f-2581-4298-afc7-64c5424ebd56\") " pod="openstack/octavia-image-upload-59f8cff499-ms28h" Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.949914 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/3c9f2e6f-2581-4298-afc7-64c5424ebd56-amphora-image\") pod \"octavia-image-upload-59f8cff499-ms28h\" (UID: \"3c9f2e6f-2581-4298-afc7-64c5424ebd56\") " pod="openstack/octavia-image-upload-59f8cff499-ms28h" Jan 27 18:37:45 crc kubenswrapper[5049]: I0127 18:37:45.962718 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3c9f2e6f-2581-4298-afc7-64c5424ebd56-httpd-config\") pod \"octavia-image-upload-59f8cff499-ms28h\" (UID: \"3c9f2e6f-2581-4298-afc7-64c5424ebd56\") " pod="openstack/octavia-image-upload-59f8cff499-ms28h" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.091406 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-ms28h" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.455020 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.570411 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920d2e54-21f8-49b4-b677-9bdd8e442323-utilities\") pod \"920d2e54-21f8-49b4-b677-9bdd8e442323\" (UID: \"920d2e54-21f8-49b4-b677-9bdd8e442323\") " Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.570490 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbzh7\" (UniqueName: \"kubernetes.io/projected/920d2e54-21f8-49b4-b677-9bdd8e442323-kube-api-access-xbzh7\") pod \"920d2e54-21f8-49b4-b677-9bdd8e442323\" (UID: \"920d2e54-21f8-49b4-b677-9bdd8e442323\") " Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.570657 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920d2e54-21f8-49b4-b677-9bdd8e442323-catalog-content\") pod \"920d2e54-21f8-49b4-b677-9bdd8e442323\" (UID: \"920d2e54-21f8-49b4-b677-9bdd8e442323\") " Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.571570 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/920d2e54-21f8-49b4-b677-9bdd8e442323-utilities" (OuterVolumeSpecName: "utilities") pod "920d2e54-21f8-49b4-b677-9bdd8e442323" (UID: "920d2e54-21f8-49b4-b677-9bdd8e442323"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.575301 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/920d2e54-21f8-49b4-b677-9bdd8e442323-kube-api-access-xbzh7" (OuterVolumeSpecName: "kube-api-access-xbzh7") pod "920d2e54-21f8-49b4-b677-9bdd8e442323" (UID: "920d2e54-21f8-49b4-b677-9bdd8e442323"). InnerVolumeSpecName "kube-api-access-xbzh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.591688 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/920d2e54-21f8-49b4-b677-9bdd8e442323-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "920d2e54-21f8-49b4-b677-9bdd8e442323" (UID: "920d2e54-21f8-49b4-b677-9bdd8e442323"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.678848 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/920d2e54-21f8-49b4-b677-9bdd8e442323-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.678894 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbzh7\" (UniqueName: \"kubernetes.io/projected/920d2e54-21f8-49b4-b677-9bdd8e442323-kube-api-access-xbzh7\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.678909 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/920d2e54-21f8-49b4-b677-9bdd8e442323-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.790025 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-ms28h"] Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.865269 5049 generic.go:334] "Generic (PLEG): container finished" podID="920d2e54-21f8-49b4-b677-9bdd8e442323" containerID="02d0dd64564a4352890a003e509e3618242ea6df2008f5a6150f4b232e9ff9d9" exitCode=0 Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.865311 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxb8s" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.865329 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxb8s" event={"ID":"920d2e54-21f8-49b4-b677-9bdd8e442323","Type":"ContainerDied","Data":"02d0dd64564a4352890a003e509e3618242ea6df2008f5a6150f4b232e9ff9d9"} Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.865783 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxb8s" event={"ID":"920d2e54-21f8-49b4-b677-9bdd8e442323","Type":"ContainerDied","Data":"76f38473fede700751eb33349aeacd706605a15fe2a3dd82a3f45c0891171983"} Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.865804 5049 scope.go:117] "RemoveContainer" containerID="02d0dd64564a4352890a003e509e3618242ea6df2008f5a6150f4b232e9ff9d9" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.870573 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-ms28h" event={"ID":"3c9f2e6f-2581-4298-afc7-64c5424ebd56","Type":"ContainerStarted","Data":"3389114e1ecc56613b20385af34f853de27fb075e9307c5d7a894f57824883b4"} Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.918040 5049 scope.go:117] "RemoveContainer" containerID="3d322512a2ef21b62a647d9ac5557aae50121b2219928c0b63fa05d74fe75b83" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.923795 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxb8s"] Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.932646 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxb8s"] Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.947821 5049 scope.go:117] "RemoveContainer" containerID="1859bfbc97bb885f1971fb9f05c271af6e8858ac27d9009ac10cf1590a703d63" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.968994 5049 scope.go:117] "RemoveContainer" containerID="02d0dd64564a4352890a003e509e3618242ea6df2008f5a6150f4b232e9ff9d9" Jan 27 18:37:46 crc 
kubenswrapper[5049]: E0127 18:37:46.984857 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02d0dd64564a4352890a003e509e3618242ea6df2008f5a6150f4b232e9ff9d9\": container with ID starting with 02d0dd64564a4352890a003e509e3618242ea6df2008f5a6150f4b232e9ff9d9 not found: ID does not exist" containerID="02d0dd64564a4352890a003e509e3618242ea6df2008f5a6150f4b232e9ff9d9" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.984918 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02d0dd64564a4352890a003e509e3618242ea6df2008f5a6150f4b232e9ff9d9"} err="failed to get container status \"02d0dd64564a4352890a003e509e3618242ea6df2008f5a6150f4b232e9ff9d9\": rpc error: code = NotFound desc = could not find container \"02d0dd64564a4352890a003e509e3618242ea6df2008f5a6150f4b232e9ff9d9\": container with ID starting with 02d0dd64564a4352890a003e509e3618242ea6df2008f5a6150f4b232e9ff9d9 not found: ID does not exist" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.984954 5049 scope.go:117] "RemoveContainer" containerID="3d322512a2ef21b62a647d9ac5557aae50121b2219928c0b63fa05d74fe75b83" Jan 27 18:37:46 crc kubenswrapper[5049]: E0127 18:37:46.985462 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d322512a2ef21b62a647d9ac5557aae50121b2219928c0b63fa05d74fe75b83\": container with ID starting with 3d322512a2ef21b62a647d9ac5557aae50121b2219928c0b63fa05d74fe75b83 not found: ID does not exist" containerID="3d322512a2ef21b62a647d9ac5557aae50121b2219928c0b63fa05d74fe75b83" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.985493 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d322512a2ef21b62a647d9ac5557aae50121b2219928c0b63fa05d74fe75b83"} err="failed to get container status \"3d322512a2ef21b62a647d9ac5557aae50121b2219928c0b63fa05d74fe75b83\": rpc error: code = NotFound desc = could not find container \"3d322512a2ef21b62a647d9ac5557aae50121b2219928c0b63fa05d74fe75b83\": container with ID starting with 3d322512a2ef21b62a647d9ac5557aae50121b2219928c0b63fa05d74fe75b83 not found: ID does not exist" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.985512 5049 scope.go:117] "RemoveContainer" containerID="1859bfbc97bb885f1971fb9f05c271af6e8858ac27d9009ac10cf1590a703d63" Jan 27 18:37:46 crc kubenswrapper[5049]: E0127 18:37:46.985807 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1859bfbc97bb885f1971fb9f05c271af6e8858ac27d9009ac10cf1590a703d63\": container with ID starting with 1859bfbc97bb885f1971fb9f05c271af6e8858ac27d9009ac10cf1590a703d63 not found: ID does not exist" containerID="1859bfbc97bb885f1971fb9f05c271af6e8858ac27d9009ac10cf1590a703d63" Jan 27 18:37:46 crc kubenswrapper[5049]: I0127 18:37:46.985841 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1859bfbc97bb885f1971fb9f05c271af6e8858ac27d9009ac10cf1590a703d63"} err="failed to get container status \"1859bfbc97bb885f1971fb9f05c271af6e8858ac27d9009ac10cf1590a703d63\": rpc error: code = NotFound desc = could not find container \"1859bfbc97bb885f1971fb9f05c271af6e8858ac27d9009ac10cf1590a703d63\": container with ID starting with 1859bfbc97bb885f1971fb9f05c271af6e8858ac27d9009ac10cf1590a703d63 not found: ID does not exist" Jan 27 18:37:47 crc kubenswrapper[5049]: 
I0127 18:37:47.658353 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="920d2e54-21f8-49b4-b677-9bdd8e442323" path="/var/lib/kubelet/pods/920d2e54-21f8-49b4-b677-9bdd8e442323/volumes" Jan 27 18:37:47 crc kubenswrapper[5049]: I0127 18:37:47.884024 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-ms28h" event={"ID":"3c9f2e6f-2581-4298-afc7-64c5424ebd56","Type":"ContainerStarted","Data":"91ba5ae3c06684a3f398c38f71403ad29b3ba51a73296ae6a148806bddaf6c70"} Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.609191 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-nhbx4"] Jan 27 18:37:48 crc kubenswrapper[5049]: E0127 18:37:48.609981 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920d2e54-21f8-49b4-b677-9bdd8e442323" containerName="extract-content" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.610004 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="920d2e54-21f8-49b4-b677-9bdd8e442323" containerName="extract-content" Jan 27 18:37:48 crc kubenswrapper[5049]: E0127 18:37:48.610019 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920d2e54-21f8-49b4-b677-9bdd8e442323" containerName="registry-server" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.610027 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="920d2e54-21f8-49b4-b677-9bdd8e442323" containerName="registry-server" Jan 27 18:37:48 crc kubenswrapper[5049]: E0127 18:37:48.610049 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920d2e54-21f8-49b4-b677-9bdd8e442323" containerName="extract-utilities" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.610058 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="920d2e54-21f8-49b4-b677-9bdd8e442323" containerName="extract-utilities" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.610282 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="920d2e54-21f8-49b4-b677-9bdd8e442323" containerName="registry-server" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.611526 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.617709 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.618150 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.621464 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.626441 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-nhbx4"] Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.725019 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7f696337-d60b-4e22-be1f-a5bb48af356b-config-data-merged\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.725137 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f696337-d60b-4e22-be1f-a5bb48af356b-combined-ca-bundle\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.725203 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f696337-d60b-4e22-be1f-a5bb48af356b-config-data\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.725229 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7f696337-d60b-4e22-be1f-a5bb48af356b-hm-ports\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.725274 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f696337-d60b-4e22-be1f-a5bb48af356b-scripts\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.725305 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7f696337-d60b-4e22-be1f-a5bb48af356b-amphora-certs\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.826612 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f696337-d60b-4e22-be1f-a5bb48af356b-scripts\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc 
kubenswrapper[5049]: I0127 18:37:48.826658 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7f696337-d60b-4e22-be1f-a5bb48af356b-amphora-certs\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.826760 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7f696337-d60b-4e22-be1f-a5bb48af356b-config-data-merged\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.826830 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f696337-d60b-4e22-be1f-a5bb48af356b-combined-ca-bundle\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.826880 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f696337-d60b-4e22-be1f-a5bb48af356b-config-data\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.826896 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7f696337-d60b-4e22-be1f-a5bb48af356b-hm-ports\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.827372 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7f696337-d60b-4e22-be1f-a5bb48af356b-config-data-merged\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.827859 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/7f696337-d60b-4e22-be1f-a5bb48af356b-hm-ports\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.832113 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f696337-d60b-4e22-be1f-a5bb48af356b-scripts\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.840632 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f696337-d60b-4e22-be1f-a5bb48af356b-combined-ca-bundle\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.840810 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"amphora-certs\" (UniqueName: \"kubernetes.io/secret/7f696337-d60b-4e22-be1f-a5bb48af356b-amphora-certs\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.841231 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f696337-d60b-4e22-be1f-a5bb48af356b-config-data\") pod \"octavia-healthmanager-nhbx4\" (UID: \"7f696337-d60b-4e22-be1f-a5bb48af356b\") " pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.898558 5049 generic.go:334] "Generic (PLEG): container finished" podID="3c9f2e6f-2581-4298-afc7-64c5424ebd56" containerID="91ba5ae3c06684a3f398c38f71403ad29b3ba51a73296ae6a148806bddaf6c70" exitCode=0 Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.898612 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-ms28h" event={"ID":"3c9f2e6f-2581-4298-afc7-64c5424ebd56","Type":"ContainerDied","Data":"91ba5ae3c06684a3f398c38f71403ad29b3ba51a73296ae6a148806bddaf6c70"} Jan 27 18:37:48 crc kubenswrapper[5049]: I0127 18:37:48.933663 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:49 crc kubenswrapper[5049]: I0127 18:37:49.811559 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-nhbx4"] Jan 27 18:37:49 crc kubenswrapper[5049]: W0127 18:37:49.835031 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f696337_d60b_4e22_be1f_a5bb48af356b.slice/crio-8c9aadffd8d7f9a4fbefe827d6f4ea1f84c6159592626c53edf7b21490eb7df4 WatchSource:0}: Error finding container 8c9aadffd8d7f9a4fbefe827d6f4ea1f84c6159592626c53edf7b21490eb7df4: Status 404 returned error can't find the container with id 8c9aadffd8d7f9a4fbefe827d6f4ea1f84c6159592626c53edf7b21490eb7df4 Jan 27 18:37:49 crc kubenswrapper[5049]: I0127 18:37:49.909125 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-nhbx4" event={"ID":"7f696337-d60b-4e22-be1f-a5bb48af356b","Type":"ContainerStarted","Data":"8c9aadffd8d7f9a4fbefe827d6f4ea1f84c6159592626c53edf7b21490eb7df4"} Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.155643 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-l6mbc"] Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.157353 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.159564 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.160809 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.167815 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-l6mbc"] Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.266328 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/564233a0-017c-4e34-8fd6-0102fd4063c0-config-data-merged\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.266417 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564233a0-017c-4e34-8fd6-0102fd4063c0-scripts\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.266568 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/564233a0-017c-4e34-8fd6-0102fd4063c0-amphora-certs\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.266726 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564233a0-017c-4e34-8fd6-0102fd4063c0-config-data\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.266828 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564233a0-017c-4e34-8fd6-0102fd4063c0-combined-ca-bundle\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.266893 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/564233a0-017c-4e34-8fd6-0102fd4063c0-hm-ports\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.368824 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564233a0-017c-4e34-8fd6-0102fd4063c0-combined-ca-bundle\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.369862 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: 
\"kubernetes.io/configmap/564233a0-017c-4e34-8fd6-0102fd4063c0-hm-ports\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.371154 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/564233a0-017c-4e34-8fd6-0102fd4063c0-config-data-merged\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.371653 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564233a0-017c-4e34-8fd6-0102fd4063c0-scripts\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.371877 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/564233a0-017c-4e34-8fd6-0102fd4063c0-amphora-certs\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.372060 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564233a0-017c-4e34-8fd6-0102fd4063c0-config-data\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.371560 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/564233a0-017c-4e34-8fd6-0102fd4063c0-config-data-merged\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.370948 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/564233a0-017c-4e34-8fd6-0102fd4063c0-hm-ports\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.373089 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564233a0-017c-4e34-8fd6-0102fd4063c0-combined-ca-bundle\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.375606 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/564233a0-017c-4e34-8fd6-0102fd4063c0-amphora-certs\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.376905 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564233a0-017c-4e34-8fd6-0102fd4063c0-config-data\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " 
pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.383925 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564233a0-017c-4e34-8fd6-0102fd4063c0-scripts\") pod \"octavia-housekeeping-l6mbc\" (UID: \"564233a0-017c-4e34-8fd6-0102fd4063c0\") " pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.481178 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.918808 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-ms28h" event={"ID":"3c9f2e6f-2581-4298-afc7-64c5424ebd56","Type":"ContainerStarted","Data":"ae6ffc6d20b0afc39d949159cbfe7dd5947f2a0b25a686bcf556fb020009ad33"} Jan 27 18:37:50 crc kubenswrapper[5049]: I0127 18:37:50.920223 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-nhbx4" event={"ID":"7f696337-d60b-4e22-be1f-a5bb48af356b","Type":"ContainerStarted","Data":"8764c56721016bf94243abb64c9eca81163c2d6f2f2d2d8becc834f7aa521299"} Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.002232 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-ms28h" podStartSLOduration=2.526218301 podStartE2EDuration="6.002206302s" podCreationTimestamp="2026-01-27 18:37:45 +0000 UTC" firstStartedPulling="2026-01-27 18:37:46.794537124 +0000 UTC m=+6041.893510673" lastFinishedPulling="2026-01-27 18:37:50.270525125 +0000 UTC m=+6045.369498674" observedRunningTime="2026-01-27 18:37:50.947553464 +0000 UTC m=+6046.046527033" watchObservedRunningTime="2026-01-27 18:37:51.002206302 +0000 UTC m=+6046.101179861" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.062183 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-l6mbc"] Jan 27 18:37:51 crc kubenswrapper[5049]: W0127 18:37:51.095825 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod564233a0_017c_4e34_8fd6_0102fd4063c0.slice/crio-c53697aeb708266b8719bb2be2a3ace72632b7c1d3f74861568a346689c7b18c WatchSource:0}: Error finding container c53697aeb708266b8719bb2be2a3ace72632b7c1d3f74861568a346689c7b18c: Status 404 returned error can't find the container with id c53697aeb708266b8719bb2be2a3ace72632b7c1d3f74861568a346689c7b18c Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.588049 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-9gnt2"] Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.590039 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.592338 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.595745 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.604407 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-9gnt2"] Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.695784 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/fa152fbd-d248-415c-a6bd-e97978297a6d-amphora-certs\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.695850 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa152fbd-d248-415c-a6bd-e97978297a6d-combined-ca-bundle\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.695887 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa152fbd-d248-415c-a6bd-e97978297a6d-config-data\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.695954 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fa152fbd-d248-415c-a6bd-e97978297a6d-config-data-merged\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.696319 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/fa152fbd-d248-415c-a6bd-e97978297a6d-hm-ports\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.696582 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa152fbd-d248-415c-a6bd-e97978297a6d-scripts\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.798698 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa152fbd-d248-415c-a6bd-e97978297a6d-combined-ca-bundle\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.798783 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa152fbd-d248-415c-a6bd-e97978297a6d-config-data\") pod \"octavia-worker-9gnt2\" (UID: 
\"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.799174 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fa152fbd-d248-415c-a6bd-e97978297a6d-config-data-merged\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.799651 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/fa152fbd-d248-415c-a6bd-e97978297a6d-config-data-merged\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.799875 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/fa152fbd-d248-415c-a6bd-e97978297a6d-hm-ports\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.800095 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa152fbd-d248-415c-a6bd-e97978297a6d-scripts\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.800197 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/fa152fbd-d248-415c-a6bd-e97978297a6d-amphora-certs\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.800970 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/fa152fbd-d248-415c-a6bd-e97978297a6d-hm-ports\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.805050 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa152fbd-d248-415c-a6bd-e97978297a6d-combined-ca-bundle\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.806774 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa152fbd-d248-415c-a6bd-e97978297a6d-config-data\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.813737 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa152fbd-d248-415c-a6bd-e97978297a6d-scripts\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.814306 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: 
\"kubernetes.io/secret/fa152fbd-d248-415c-a6bd-e97978297a6d-amphora-certs\") pod \"octavia-worker-9gnt2\" (UID: \"fa152fbd-d248-415c-a6bd-e97978297a6d\") " pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.917245 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-9gnt2" Jan 27 18:37:51 crc kubenswrapper[5049]: I0127 18:37:51.932763 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-l6mbc" event={"ID":"564233a0-017c-4e34-8fd6-0102fd4063c0","Type":"ContainerStarted","Data":"c53697aeb708266b8719bb2be2a3ace72632b7c1d3f74861568a346689c7b18c"} Jan 27 18:37:52 crc kubenswrapper[5049]: I0127 18:37:52.523480 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-9gnt2"] Jan 27 18:37:52 crc kubenswrapper[5049]: W0127 18:37:52.660586 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa152fbd_d248_415c_a6bd_e97978297a6d.slice/crio-b76a2f3da707e5a33fb0b489267260439c89af4a07abb54e8874c16e59cd9d56 WatchSource:0}: Error finding container b76a2f3da707e5a33fb0b489267260439c89af4a07abb54e8874c16e59cd9d56: Status 404 returned error can't find the container with id b76a2f3da707e5a33fb0b489267260439c89af4a07abb54e8874c16e59cd9d56 Jan 27 18:37:52 crc kubenswrapper[5049]: I0127 18:37:52.930182 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:37:52 crc kubenswrapper[5049]: I0127 18:37:52.949505 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-9gnt2" event={"ID":"fa152fbd-d248-415c-a6bd-e97978297a6d","Type":"ContainerStarted","Data":"b76a2f3da707e5a33fb0b489267260439c89af4a07abb54e8874c16e59cd9d56"} Jan 27 18:37:52 crc kubenswrapper[5049]: I0127 18:37:52.951926 5049 generic.go:334] "Generic (PLEG): container finished" podID="7f696337-d60b-4e22-be1f-a5bb48af356b" containerID="8764c56721016bf94243abb64c9eca81163c2d6f2f2d2d8becc834f7aa521299" exitCode=0 Jan 27 18:37:52 crc kubenswrapper[5049]: I0127 18:37:52.952040 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-nhbx4" event={"ID":"7f696337-d60b-4e22-be1f-a5bb48af356b","Type":"ContainerDied","Data":"8764c56721016bf94243abb64c9eca81163c2d6f2f2d2d8becc834f7aa521299"} Jan 27 18:37:53 crc kubenswrapper[5049]: I0127 18:37:53.024668 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:37:53 crc kubenswrapper[5049]: I0127 18:37:53.166939 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tzscp"] Jan 27 18:37:53 crc kubenswrapper[5049]: I0127 18:37:53.403429 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-nhbx4"] Jan 27 18:37:53 crc kubenswrapper[5049]: I0127 18:37:53.961483 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tzscp" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerName="registry-server" containerID="cri-o://e220e0769f58124551f152d3cb68803a93177b66a1b6cf6a9db5171977f06516" gracePeriod=2 Jan 27 18:37:54 crc kubenswrapper[5049]: I0127 18:37:54.972393 5049 generic.go:334] "Generic (PLEG): container finished" podID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" 
containerID="e220e0769f58124551f152d3cb68803a93177b66a1b6cf6a9db5171977f06516" exitCode=0 Jan 27 18:37:54 crc kubenswrapper[5049]: I0127 18:37:54.972427 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzscp" event={"ID":"faaf8fb9-4f08-4d0a-b9c5-d516222e671d","Type":"ContainerDied","Data":"e220e0769f58124551f152d3cb68803a93177b66a1b6cf6a9db5171977f06516"} Jan 27 18:37:54 crc kubenswrapper[5049]: I0127 18:37:54.975878 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-nhbx4" event={"ID":"7f696337-d60b-4e22-be1f-a5bb48af356b","Type":"ContainerStarted","Data":"c3c9cab4403cc8ad302ecb0b11a3d0b8644f32e4d529e446fc797c230e9e4ad0"} Jan 27 18:37:54 crc kubenswrapper[5049]: I0127 18:37:54.976060 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:37:54 crc kubenswrapper[5049]: I0127 18:37:54.977571 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-l6mbc" event={"ID":"564233a0-017c-4e34-8fd6-0102fd4063c0","Type":"ContainerStarted","Data":"b9b4e4286379c69d859da6bff3a7e3c1e8371c4d331417bb345a94e1b2f07183"} Jan 27 18:37:55 crc kubenswrapper[5049]: I0127 18:37:55.005098 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-nhbx4" podStartSLOduration=7.005076235 podStartE2EDuration="7.005076235s" podCreationTimestamp="2026-01-27 18:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 18:37:54.997779306 +0000 UTC m=+6050.096752875" watchObservedRunningTime="2026-01-27 18:37:55.005076235 +0000 UTC m=+6050.104049784" Jan 27 18:37:55 crc kubenswrapper[5049]: I0127 18:37:55.655252 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" Jan 27 18:37:55 crc kubenswrapper[5049]: E0127 18:37:55.656066 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:37:56 crc kubenswrapper[5049]: I0127 18:37:56.337150 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:37:56 crc kubenswrapper[5049]: I0127 18:37:56.385869 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-842tn\" (UniqueName: \"kubernetes.io/projected/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-kube-api-access-842tn\") pod \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\" (UID: \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\") " Jan 27 18:37:56 crc kubenswrapper[5049]: I0127 18:37:56.386089 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-catalog-content\") pod \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\" (UID: \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\") " Jan 27 18:37:56 crc kubenswrapper[5049]: I0127 18:37:56.386119 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-utilities\") pod \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\" (UID: \"faaf8fb9-4f08-4d0a-b9c5-d516222e671d\") " Jan 27 18:37:56 crc kubenswrapper[5049]: I0127 18:37:56.387590 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-utilities" (OuterVolumeSpecName: "utilities") pod "faaf8fb9-4f08-4d0a-b9c5-d516222e671d" (UID: "faaf8fb9-4f08-4d0a-b9c5-d516222e671d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:37:56 crc kubenswrapper[5049]: I0127 18:37:56.413639 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-kube-api-access-842tn" (OuterVolumeSpecName: "kube-api-access-842tn") pod "faaf8fb9-4f08-4d0a-b9c5-d516222e671d" (UID: "faaf8fb9-4f08-4d0a-b9c5-d516222e671d"). InnerVolumeSpecName "kube-api-access-842tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:37:56 crc kubenswrapper[5049]: I0127 18:37:56.489427 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-842tn\" (UniqueName: \"kubernetes.io/projected/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-kube-api-access-842tn\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:56 crc kubenswrapper[5049]: I0127 18:37:56.489464 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:56 crc kubenswrapper[5049]: I0127 18:37:56.538901 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "faaf8fb9-4f08-4d0a-b9c5-d516222e671d" (UID: "faaf8fb9-4f08-4d0a-b9c5-d516222e671d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:37:56 crc kubenswrapper[5049]: I0127 18:37:56.591067 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faaf8fb9-4f08-4d0a-b9c5-d516222e671d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:37:56 crc kubenswrapper[5049]: I0127 18:37:56.999765 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tzscp" event={"ID":"faaf8fb9-4f08-4d0a-b9c5-d516222e671d","Type":"ContainerDied","Data":"bd45f80ad48e12dadd14ca153a93ee95dbe031c100ddfb3be3afc3558fcd963d"} Jan 27 18:37:57 crc kubenswrapper[5049]: I0127 18:37:56.999829 5049 scope.go:117] "RemoveContainer" containerID="e220e0769f58124551f152d3cb68803a93177b66a1b6cf6a9db5171977f06516" Jan 27 18:37:57 crc kubenswrapper[5049]: I0127 18:37:56.999846 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tzscp" Jan 27 18:37:57 crc kubenswrapper[5049]: I0127 18:37:57.034587 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tzscp"] Jan 27 18:37:57 crc kubenswrapper[5049]: I0127 18:37:57.046721 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tzscp"] Jan 27 18:37:57 crc kubenswrapper[5049]: I0127 18:37:57.678536 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" path="/var/lib/kubelet/pods/faaf8fb9-4f08-4d0a-b9c5-d516222e671d/volumes" Jan 27 18:37:58 crc kubenswrapper[5049]: I0127 18:37:58.973877 5049 scope.go:117] "RemoveContainer" containerID="f9b0901be974dcab6b301499a1923f495417102f307e5bbd63e0e188a958d900" Jan 27 18:37:59 crc kubenswrapper[5049]: I0127 18:37:59.035867 5049 scope.go:117] "RemoveContainer" containerID="55f782068cb53a54f13163cd399cf9a6079509b4078d72cb772c24f93728009f" Jan 27 18:38:00 crc kubenswrapper[5049]: I0127 18:38:00.035056 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-9gnt2" event={"ID":"fa152fbd-d248-415c-a6bd-e97978297a6d","Type":"ContainerStarted","Data":"8582a869742fecdcb192c719391424fd993cfd8f13f4ad5eac4804cbb48b24c3"} Jan 27 18:38:00 crc kubenswrapper[5049]: I0127 18:38:00.037210 5049 generic.go:334] "Generic (PLEG): container finished" podID="564233a0-017c-4e34-8fd6-0102fd4063c0" containerID="b9b4e4286379c69d859da6bff3a7e3c1e8371c4d331417bb345a94e1b2f07183" exitCode=0 Jan 27 18:38:00 crc kubenswrapper[5049]: I0127 18:38:00.037247 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-l6mbc" event={"ID":"564233a0-017c-4e34-8fd6-0102fd4063c0","Type":"ContainerDied","Data":"b9b4e4286379c69d859da6bff3a7e3c1e8371c4d331417bb345a94e1b2f07183"} Jan 27 18:38:01 crc kubenswrapper[5049]: I0127 18:38:01.051324 5049 generic.go:334] "Generic (PLEG): container finished" podID="fa152fbd-d248-415c-a6bd-e97978297a6d" containerID="8582a869742fecdcb192c719391424fd993cfd8f13f4ad5eac4804cbb48b24c3" exitCode=0 Jan 27 18:38:01 crc kubenswrapper[5049]: I0127 18:38:01.051451 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-9gnt2" event={"ID":"fa152fbd-d248-415c-a6bd-e97978297a6d","Type":"ContainerDied","Data":"8582a869742fecdcb192c719391424fd993cfd8f13f4ad5eac4804cbb48b24c3"} Jan 27 18:38:01 crc kubenswrapper[5049]: I0127 18:38:01.057802 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-housekeeping-l6mbc" event={"ID":"564233a0-017c-4e34-8fd6-0102fd4063c0","Type":"ContainerStarted","Data":"f0fd8a7d05a7d3e5569452dac51d678337ceca2107b406a60381c00c076b60c8"} Jan 27 18:38:01 crc kubenswrapper[5049]: I0127 18:38:01.058075 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:38:01 crc kubenswrapper[5049]: I0127 18:38:01.108661 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-l6mbc" podStartSLOduration=9.501211165 podStartE2EDuration="11.108641541s" podCreationTimestamp="2026-01-27 18:37:50 +0000 UTC" firstStartedPulling="2026-01-27 18:37:51.104471295 +0000 UTC m=+6046.203444844" lastFinishedPulling="2026-01-27 18:37:52.711901671 +0000 UTC m=+6047.810875220" observedRunningTime="2026-01-27 18:38:01.10059449 +0000 UTC m=+6056.199568049" watchObservedRunningTime="2026-01-27 18:38:01.108641541 +0000 UTC m=+6056.207615090" Jan 27 18:38:02 crc kubenswrapper[5049]: I0127 18:38:02.068474 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-9gnt2" event={"ID":"fa152fbd-d248-415c-a6bd-e97978297a6d","Type":"ContainerStarted","Data":"c53ac9af2fafcca4b82bab62daf629ab8728579481572067bef95edcae5879a8"} Jan 27 18:38:02 crc kubenswrapper[5049]: I0127 18:38:02.090828 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-9gnt2" podStartSLOduration=4.47526846 podStartE2EDuration="11.090803101s" podCreationTimestamp="2026-01-27 18:37:51 +0000 UTC" firstStartedPulling="2026-01-27 18:37:52.67142165 +0000 UTC m=+6047.770395209" lastFinishedPulling="2026-01-27 18:37:59.286956291 +0000 UTC m=+6054.385929850" observedRunningTime="2026-01-27 18:38:02.083421039 +0000 UTC m=+6057.182394598" watchObservedRunningTime="2026-01-27 18:38:02.090803101 +0000 UTC m=+6057.189776650" Jan 27 18:38:03 crc kubenswrapper[5049]: I0127 18:38:03.078231 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-9gnt2" Jan 27 18:38:03 crc kubenswrapper[5049]: I0127 18:38:03.964480 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-nhbx4" Jan 27 18:38:05 crc kubenswrapper[5049]: I0127 18:38:05.510724 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-l6mbc" Jan 27 18:38:06 crc kubenswrapper[5049]: I0127 18:38:06.947119 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-9gnt2" Jan 27 18:38:08 crc kubenswrapper[5049]: I0127 18:38:08.646165 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" Jan 27 18:38:08 crc kubenswrapper[5049]: E0127 18:38:08.646602 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:38:21 crc kubenswrapper[5049]: I0127 18:38:21.646586 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" Jan 27 18:38:21 crc kubenswrapper[5049]: E0127 18:38:21.648062 5049 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:38:32 crc kubenswrapper[5049]: I0127 18:38:32.646655 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" Jan 27 18:38:32 crc kubenswrapper[5049]: E0127 18:38:32.647700 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:38:38 crc kubenswrapper[5049]: I0127 18:38:38.049202 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8xwg6"] Jan 27 18:38:38 crc kubenswrapper[5049]: I0127 18:38:38.082910 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-abd8-account-create-update-nxr56"] Jan 27 18:38:38 crc kubenswrapper[5049]: I0127 18:38:38.092833 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-abd8-account-create-update-nxr56"] Jan 27 18:38:38 crc kubenswrapper[5049]: I0127 18:38:38.104415 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8xwg6"] Jan 27 18:38:39 crc kubenswrapper[5049]: I0127 18:38:39.658146 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6072ab49-faed-4bfd-a9f1-7c2bf042e5a6" path="/var/lib/kubelet/pods/6072ab49-faed-4bfd-a9f1-7c2bf042e5a6/volumes" Jan 27 18:38:39 crc kubenswrapper[5049]: I0127 18:38:39.659843 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24494b5-89ad-44dc-a138-69241e3c1e5b" path="/var/lib/kubelet/pods/b24494b5-89ad-44dc-a138-69241e3c1e5b/volumes" Jan 27 18:38:43 crc kubenswrapper[5049]: I0127 18:38:43.645659 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" Jan 27 18:38:43 crc kubenswrapper[5049]: E0127 18:38:43.646504 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:38:44 crc kubenswrapper[5049]: I0127 18:38:44.028895 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2cgdj"] Jan 27 18:38:44 crc kubenswrapper[5049]: I0127 18:38:44.038235 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2cgdj"] Jan 27 18:38:45 crc kubenswrapper[5049]: I0127 18:38:45.659950 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6cce11c-d752-4e6a-8df3-d4262505bb1f" path="/var/lib/kubelet/pods/c6cce11c-d752-4e6a-8df3-d4262505bb1f/volumes" Jan 27 18:38:58 crc kubenswrapper[5049]: 
I0127 18:38:58.645841 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" Jan 27 18:38:58 crc kubenswrapper[5049]: E0127 18:38:58.646586 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:39:11 crc kubenswrapper[5049]: I0127 18:39:11.071165 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9a3e-account-create-update-zdnpx"] Jan 27 18:39:11 crc kubenswrapper[5049]: I0127 18:39:11.084129 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9a3e-account-create-update-zdnpx"] Jan 27 18:39:11 crc kubenswrapper[5049]: I0127 18:39:11.646576 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" Jan 27 18:39:11 crc kubenswrapper[5049]: E0127 18:39:11.646824 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:39:11 crc kubenswrapper[5049]: I0127 18:39:11.673538 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcea57ac-4299-451d-b94c-e0fe7457439b" path="/var/lib/kubelet/pods/bcea57ac-4299-451d-b94c-e0fe7457439b/volumes" Jan 27 18:39:12 crc kubenswrapper[5049]: I0127 18:39:12.027861 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-rs229"] Jan 27 18:39:12 crc kubenswrapper[5049]: I0127 18:39:12.039012 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-rs229"] Jan 27 18:39:13 crc kubenswrapper[5049]: I0127 18:39:13.658573 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c06805d-2324-478a-97a3-c8b6bbaf12f5" path="/var/lib/kubelet/pods/4c06805d-2324-478a-97a3-c8b6bbaf12f5/volumes" Jan 27 18:39:17 crc kubenswrapper[5049]: I0127 18:39:17.763030 5049 scope.go:117] "RemoveContainer" containerID="330af43bb38f6e7342cddde186c012e2d3f83d87f155f20e4bb457cac0424393" Jan 27 18:39:17 crc kubenswrapper[5049]: I0127 18:39:17.795616 5049 scope.go:117] "RemoveContainer" containerID="ab3a9aadfb96094dc76c2e79d46bbb51f0a3c40eacbdc5eb55d13623977c7b46" Jan 27 18:39:17 crc kubenswrapper[5049]: I0127 18:39:17.855181 5049 scope.go:117] "RemoveContainer" containerID="a6801024e17d24343d267d28c967cd5bcbe0d22d5c4282d1fb0670a7b4a86677" Jan 27 18:39:17 crc kubenswrapper[5049]: I0127 18:39:17.901321 5049 scope.go:117] "RemoveContainer" containerID="316a0979ed5d061a46353c9f222a71986b0695bdef2e05a687e5574b2c68ffc4" Jan 27 18:39:17 crc kubenswrapper[5049]: I0127 18:39:17.958647 5049 scope.go:117] "RemoveContainer" containerID="4798ec7b8ecc5358270429c899f0c1d5095ab9670af65023065e093a09345f29" Jan 27 18:39:20 crc kubenswrapper[5049]: I0127 18:39:20.054216 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-hjzb7"] Jan 27 18:39:20 crc 
kubenswrapper[5049]: I0127 18:39:20.069647 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-hjzb7"] Jan 27 18:39:21 crc kubenswrapper[5049]: I0127 18:39:21.667191 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7085d906-0bf7-4167-9c11-7d2468761de9" path="/var/lib/kubelet/pods/7085d906-0bf7-4167-9c11-7d2468761de9/volumes" Jan 27 18:39:25 crc kubenswrapper[5049]: I0127 18:39:25.661828 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" Jan 27 18:39:25 crc kubenswrapper[5049]: E0127 18:39:25.665035 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:39:36 crc kubenswrapper[5049]: I0127 18:39:36.646424 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" Jan 27 18:39:36 crc kubenswrapper[5049]: E0127 18:39:36.647214 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.450044 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ljfwz"] Jan 27 18:39:37 crc kubenswrapper[5049]: E0127 18:39:37.450643 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerName="extract-content" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.450661 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerName="extract-content" Jan 27 18:39:37 crc kubenswrapper[5049]: E0127 18:39:37.450674 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerName="registry-server" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.450680 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerName="registry-server" Jan 27 18:39:37 crc kubenswrapper[5049]: E0127 18:39:37.450716 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerName="extract-utilities" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.450724 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerName="extract-utilities" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.450908 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="faaf8fb9-4f08-4d0a-b9c5-d516222e671d" containerName="registry-server" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.452200 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.465359 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljfwz"] Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.541831 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d735449-e8de-4d1d-b259-3b6710d9d08a-utilities\") pod \"community-operators-ljfwz\" (UID: \"6d735449-e8de-4d1d-b259-3b6710d9d08a\") " pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.541912 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d735449-e8de-4d1d-b259-3b6710d9d08a-catalog-content\") pod \"community-operators-ljfwz\" (UID: \"6d735449-e8de-4d1d-b259-3b6710d9d08a\") " pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.541960 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqc7q\" (UniqueName: \"kubernetes.io/projected/6d735449-e8de-4d1d-b259-3b6710d9d08a-kube-api-access-pqc7q\") pod \"community-operators-ljfwz\" (UID: \"6d735449-e8de-4d1d-b259-3b6710d9d08a\") " pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.644146 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d735449-e8de-4d1d-b259-3b6710d9d08a-catalog-content\") pod \"community-operators-ljfwz\" (UID: \"6d735449-e8de-4d1d-b259-3b6710d9d08a\") " pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.644246 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqc7q\" (UniqueName: \"kubernetes.io/projected/6d735449-e8de-4d1d-b259-3b6710d9d08a-kube-api-access-pqc7q\") pod \"community-operators-ljfwz\" (UID: \"6d735449-e8de-4d1d-b259-3b6710d9d08a\") " pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.644413 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d735449-e8de-4d1d-b259-3b6710d9d08a-utilities\") pod \"community-operators-ljfwz\" (UID: \"6d735449-e8de-4d1d-b259-3b6710d9d08a\") " pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.645358 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d735449-e8de-4d1d-b259-3b6710d9d08a-catalog-content\") pod \"community-operators-ljfwz\" (UID: \"6d735449-e8de-4d1d-b259-3b6710d9d08a\") " pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.645917 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d735449-e8de-4d1d-b259-3b6710d9d08a-utilities\") pod \"community-operators-ljfwz\" (UID: \"6d735449-e8de-4d1d-b259-3b6710d9d08a\") " pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.674016 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pqc7q\" (UniqueName: \"kubernetes.io/projected/6d735449-e8de-4d1d-b259-3b6710d9d08a-kube-api-access-pqc7q\") pod \"community-operators-ljfwz\" (UID: \"6d735449-e8de-4d1d-b259-3b6710d9d08a\") " pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:37 crc kubenswrapper[5049]: I0127 18:39:37.771346 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:38 crc kubenswrapper[5049]: I0127 18:39:38.353333 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljfwz"] Jan 27 18:39:39 crc kubenswrapper[5049]: I0127 18:39:39.046450 5049 generic.go:334] "Generic (PLEG): container finished" podID="6d735449-e8de-4d1d-b259-3b6710d9d08a" containerID="4e00b83b76593b94a82b4a4247d65c15474adb0fd35b4f9c02f4f0611f5fd07a" exitCode=0 Jan 27 18:39:39 crc kubenswrapper[5049]: I0127 18:39:39.046554 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljfwz" event={"ID":"6d735449-e8de-4d1d-b259-3b6710d9d08a","Type":"ContainerDied","Data":"4e00b83b76593b94a82b4a4247d65c15474adb0fd35b4f9c02f4f0611f5fd07a"} Jan 27 18:39:39 crc kubenswrapper[5049]: I0127 18:39:39.046803 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljfwz" event={"ID":"6d735449-e8de-4d1d-b259-3b6710d9d08a","Type":"ContainerStarted","Data":"ec1f5aceb1a861c2a0e3905d5b720030602bf80251492b12338e537e9da152b5"} Jan 27 18:39:39 crc kubenswrapper[5049]: I0127 18:39:39.049009 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:39:40 crc kubenswrapper[5049]: I0127 18:39:40.057287 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljfwz" event={"ID":"6d735449-e8de-4d1d-b259-3b6710d9d08a","Type":"ContainerStarted","Data":"4f9af778cacb638f5dcf2db373fd0f1432dc9d3d39c146684de94d5f6a80a409"} Jan 27 18:39:42 crc kubenswrapper[5049]: I0127 18:39:42.079173 5049 generic.go:334] "Generic (PLEG): container finished" podID="6d735449-e8de-4d1d-b259-3b6710d9d08a" containerID="4f9af778cacb638f5dcf2db373fd0f1432dc9d3d39c146684de94d5f6a80a409" exitCode=0 Jan 27 18:39:42 crc kubenswrapper[5049]: I0127 18:39:42.079270 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljfwz" event={"ID":"6d735449-e8de-4d1d-b259-3b6710d9d08a","Type":"ContainerDied","Data":"4f9af778cacb638f5dcf2db373fd0f1432dc9d3d39c146684de94d5f6a80a409"} Jan 27 18:39:43 crc kubenswrapper[5049]: I0127 18:39:43.096175 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljfwz" event={"ID":"6d735449-e8de-4d1d-b259-3b6710d9d08a","Type":"ContainerStarted","Data":"1196e7254a43d2564e13df27af0d09a59e646fbc5f43b803f9d8bf6587a3f764"} Jan 27 18:39:43 crc kubenswrapper[5049]: I0127 18:39:43.123668 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ljfwz" podStartSLOduration=2.657793127 podStartE2EDuration="6.123647827s" podCreationTimestamp="2026-01-27 18:39:37 +0000 UTC" firstStartedPulling="2026-01-27 18:39:39.048714197 +0000 UTC m=+6154.147687746" lastFinishedPulling="2026-01-27 18:39:42.514568877 +0000 UTC m=+6157.613542446" observedRunningTime="2026-01-27 18:39:43.116931064 +0000 UTC m=+6158.215904653" watchObservedRunningTime="2026-01-27 
18:39:43.123647827 +0000 UTC m=+6158.222621376" Jan 27 18:39:47 crc kubenswrapper[5049]: I0127 18:39:47.771905 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:47 crc kubenswrapper[5049]: I0127 18:39:47.772550 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:47 crc kubenswrapper[5049]: I0127 18:39:47.859080 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:48 crc kubenswrapper[5049]: I0127 18:39:48.203463 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ljfwz" Jan 27 18:39:48 crc kubenswrapper[5049]: I0127 18:39:48.270919 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljfwz"] Jan 27 18:39:49 crc kubenswrapper[5049]: I0127 18:39:49.646267 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" Jan 27 18:39:49 crc kubenswrapper[5049]: E0127 18:39:49.646812 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:39:50 crc kubenswrapper[5049]: I0127 18:39:50.173178 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ljfwz" podUID="6d735449-e8de-4d1d-b259-3b6710d9d08a" containerName="registry-server" containerID="cri-o://1196e7254a43d2564e13df27af0d09a59e646fbc5f43b803f9d8bf6587a3f764" gracePeriod=2 Jan 27 18:39:50 crc kubenswrapper[5049]: I0127 18:39:50.630926 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljfwz"
Jan 27 18:39:50 crc kubenswrapper[5049]: I0127 18:39:50.752203 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d735449-e8de-4d1d-b259-3b6710d9d08a-utilities\") pod \"6d735449-e8de-4d1d-b259-3b6710d9d08a\" (UID: \"6d735449-e8de-4d1d-b259-3b6710d9d08a\") "
Jan 27 18:39:50 crc kubenswrapper[5049]: I0127 18:39:50.752438 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqc7q\" (UniqueName: \"kubernetes.io/projected/6d735449-e8de-4d1d-b259-3b6710d9d08a-kube-api-access-pqc7q\") pod \"6d735449-e8de-4d1d-b259-3b6710d9d08a\" (UID: \"6d735449-e8de-4d1d-b259-3b6710d9d08a\") "
Jan 27 18:39:50 crc kubenswrapper[5049]: I0127 18:39:50.752492 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d735449-e8de-4d1d-b259-3b6710d9d08a-catalog-content\") pod \"6d735449-e8de-4d1d-b259-3b6710d9d08a\" (UID: \"6d735449-e8de-4d1d-b259-3b6710d9d08a\") "
Jan 27 18:39:50 crc kubenswrapper[5049]: I0127 18:39:50.753052 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d735449-e8de-4d1d-b259-3b6710d9d08a-utilities" (OuterVolumeSpecName: "utilities") pod "6d735449-e8de-4d1d-b259-3b6710d9d08a" (UID: "6d735449-e8de-4d1d-b259-3b6710d9d08a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:39:50 crc kubenswrapper[5049]: I0127 18:39:50.757757 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d735449-e8de-4d1d-b259-3b6710d9d08a-kube-api-access-pqc7q" (OuterVolumeSpecName: "kube-api-access-pqc7q") pod "6d735449-e8de-4d1d-b259-3b6710d9d08a" (UID: "6d735449-e8de-4d1d-b259-3b6710d9d08a"). InnerVolumeSpecName "kube-api-access-pqc7q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:39:50 crc kubenswrapper[5049]: I0127 18:39:50.822848 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d735449-e8de-4d1d-b259-3b6710d9d08a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d735449-e8de-4d1d-b259-3b6710d9d08a" (UID: "6d735449-e8de-4d1d-b259-3b6710d9d08a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:39:50 crc kubenswrapper[5049]: I0127 18:39:50.855118 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqc7q\" (UniqueName: \"kubernetes.io/projected/6d735449-e8de-4d1d-b259-3b6710d9d08a-kube-api-access-pqc7q\") on node \"crc\" DevicePath \"\""
Jan 27 18:39:50 crc kubenswrapper[5049]: I0127 18:39:50.855148 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d735449-e8de-4d1d-b259-3b6710d9d08a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 18:39:50 crc kubenswrapper[5049]: I0127 18:39:50.855173 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d735449-e8de-4d1d-b259-3b6710d9d08a-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.185337 5049 generic.go:334] "Generic (PLEG): container finished" podID="6d735449-e8de-4d1d-b259-3b6710d9d08a" containerID="1196e7254a43d2564e13df27af0d09a59e646fbc5f43b803f9d8bf6587a3f764" exitCode=0
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.185384 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljfwz" event={"ID":"6d735449-e8de-4d1d-b259-3b6710d9d08a","Type":"ContainerDied","Data":"1196e7254a43d2564e13df27af0d09a59e646fbc5f43b803f9d8bf6587a3f764"}
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.185722 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljfwz" event={"ID":"6d735449-e8de-4d1d-b259-3b6710d9d08a","Type":"ContainerDied","Data":"ec1f5aceb1a861c2a0e3905d5b720030602bf80251492b12338e537e9da152b5"}
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.185444 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljfwz"
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.185758 5049 scope.go:117] "RemoveContainer" containerID="1196e7254a43d2564e13df27af0d09a59e646fbc5f43b803f9d8bf6587a3f764"
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.223827 5049 scope.go:117] "RemoveContainer" containerID="4f9af778cacb638f5dcf2db373fd0f1432dc9d3d39c146684de94d5f6a80a409"
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.237688 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljfwz"]
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.247466 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ljfwz"]
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.263489 5049 scope.go:117] "RemoveContainer" containerID="4e00b83b76593b94a82b4a4247d65c15474adb0fd35b4f9c02f4f0611f5fd07a"
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.300851 5049 scope.go:117] "RemoveContainer" containerID="1196e7254a43d2564e13df27af0d09a59e646fbc5f43b803f9d8bf6587a3f764"
Jan 27 18:39:51 crc kubenswrapper[5049]: E0127 18:39:51.301319 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1196e7254a43d2564e13df27af0d09a59e646fbc5f43b803f9d8bf6587a3f764\": container with ID starting with 1196e7254a43d2564e13df27af0d09a59e646fbc5f43b803f9d8bf6587a3f764 not found: ID does not exist" containerID="1196e7254a43d2564e13df27af0d09a59e646fbc5f43b803f9d8bf6587a3f764"
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.301378 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1196e7254a43d2564e13df27af0d09a59e646fbc5f43b803f9d8bf6587a3f764"} err="failed to get container status \"1196e7254a43d2564e13df27af0d09a59e646fbc5f43b803f9d8bf6587a3f764\": rpc error: code = NotFound desc = could not find container \"1196e7254a43d2564e13df27af0d09a59e646fbc5f43b803f9d8bf6587a3f764\": container with ID starting with 1196e7254a43d2564e13df27af0d09a59e646fbc5f43b803f9d8bf6587a3f764 not found: ID does not exist"
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.301398 5049 scope.go:117] "RemoveContainer" containerID="4f9af778cacb638f5dcf2db373fd0f1432dc9d3d39c146684de94d5f6a80a409"
Jan 27 18:39:51 crc kubenswrapper[5049]: E0127 18:39:51.301935 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f9af778cacb638f5dcf2db373fd0f1432dc9d3d39c146684de94d5f6a80a409\": container with ID starting with 4f9af778cacb638f5dcf2db373fd0f1432dc9d3d39c146684de94d5f6a80a409 not found: ID does not exist" containerID="4f9af778cacb638f5dcf2db373fd0f1432dc9d3d39c146684de94d5f6a80a409"
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.302001 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f9af778cacb638f5dcf2db373fd0f1432dc9d3d39c146684de94d5f6a80a409"} err="failed to get container status \"4f9af778cacb638f5dcf2db373fd0f1432dc9d3d39c146684de94d5f6a80a409\": rpc error: code = NotFound desc = could not find container \"4f9af778cacb638f5dcf2db373fd0f1432dc9d3d39c146684de94d5f6a80a409\": container with ID starting with 4f9af778cacb638f5dcf2db373fd0f1432dc9d3d39c146684de94d5f6a80a409 not found: ID does not exist"
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.302036 5049 scope.go:117] "RemoveContainer" containerID="4e00b83b76593b94a82b4a4247d65c15474adb0fd35b4f9c02f4f0611f5fd07a"
Jan 27 18:39:51 crc kubenswrapper[5049]: E0127 18:39:51.302323 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e00b83b76593b94a82b4a4247d65c15474adb0fd35b4f9c02f4f0611f5fd07a\": container with ID starting with 4e00b83b76593b94a82b4a4247d65c15474adb0fd35b4f9c02f4f0611f5fd07a not found: ID does not exist" containerID="4e00b83b76593b94a82b4a4247d65c15474adb0fd35b4f9c02f4f0611f5fd07a"
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.302349 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e00b83b76593b94a82b4a4247d65c15474adb0fd35b4f9c02f4f0611f5fd07a"} err="failed to get container status \"4e00b83b76593b94a82b4a4247d65c15474adb0fd35b4f9c02f4f0611f5fd07a\": rpc error: code = NotFound desc = could not find container \"4e00b83b76593b94a82b4a4247d65c15474adb0fd35b4f9c02f4f0611f5fd07a\": container with ID starting with 4e00b83b76593b94a82b4a4247d65c15474adb0fd35b4f9c02f4f0611f5fd07a not found: ID does not exist"
Jan 27 18:39:51 crc kubenswrapper[5049]: I0127 18:39:51.659541 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d735449-e8de-4d1d-b259-3b6710d9d08a" path="/var/lib/kubelet/pods/6d735449-e8de-4d1d-b259-3b6710d9d08a/volumes"
Jan 27 18:40:02 crc kubenswrapper[5049]: I0127 18:40:02.043390 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b1f2-account-create-update-6nmwv"]
Jan 27 18:40:02 crc kubenswrapper[5049]: I0127 18:40:02.052419 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-bsb4r"]
Jan 27 18:40:02 crc kubenswrapper[5049]: I0127 18:40:02.061985 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-bsb4r"]
Jan 27 18:40:02 crc kubenswrapper[5049]: I0127 18:40:02.070295 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b1f2-account-create-update-6nmwv"]
Jan 27 18:40:03 crc kubenswrapper[5049]: I0127 18:40:03.646420 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"
Jan 27 18:40:03 crc kubenswrapper[5049]: E0127 18:40:03.646835 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:40:03 crc kubenswrapper[5049]: I0127 18:40:03.662017 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217ee884-d205-47b6-8b5a-38b054f72ca8" path="/var/lib/kubelet/pods/217ee884-d205-47b6-8b5a-38b054f72ca8/volumes"
Jan 27 18:40:03 crc kubenswrapper[5049]: I0127 18:40:03.664276 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d90a52d8-4aaf-476e-b92e-0b6b4894c134" path="/var/lib/kubelet/pods/d90a52d8-4aaf-476e-b92e-0b6b4894c134/volumes"
Jan 27 18:40:11 crc kubenswrapper[5049]: I0127 18:40:11.028482 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-gtjbs"]
Jan 27 18:40:11 crc kubenswrapper[5049]: I0127 18:40:11.039251 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-gtjbs"]
Jan 27 18:40:11 crc kubenswrapper[5049]: I0127 18:40:11.663052 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16811b41-1b68-4fea-bff7-76feac374a7c" path="/var/lib/kubelet/pods/16811b41-1b68-4fea-bff7-76feac374a7c/volumes"
Jan 27 18:40:14 crc kubenswrapper[5049]: I0127 18:40:14.648222 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"
Jan 27 18:40:14 crc kubenswrapper[5049]: E0127 18:40:14.649419 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:40:18 crc kubenswrapper[5049]: I0127 18:40:18.098446 5049 scope.go:117] "RemoveContainer" containerID="b7485ce71ce1303384b8b16b3a9c38bef1e7b36cf626d29e13ef5523aa36d016"
Jan 27 18:40:18 crc kubenswrapper[5049]: I0127 18:40:18.143946 5049 scope.go:117] "RemoveContainer" containerID="5d70e3147c9ccd684ef9de160dae4bfc7946624825106837e68894a2ba3f6d5a"
Jan 27 18:40:18 crc kubenswrapper[5049]: I0127 18:40:18.204769 5049 scope.go:117] "RemoveContainer" containerID="e30ac110a09b18827a285eedcdfb3cec0c19976dd3895af7a20d798b76b73415"
Jan 27 18:40:18 crc kubenswrapper[5049]: I0127 18:40:18.224214 5049 scope.go:117] "RemoveContainer" containerID="2302b002e8ea70a4222015ba8659ebd92c541aee919550efe97d842b58bd8277"
Jan 27 18:40:28 crc kubenswrapper[5049]: I0127 18:40:28.647494 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"
Jan 27 18:40:28 crc kubenswrapper[5049]: E0127 18:40:28.648424 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:40:40 crc kubenswrapper[5049]: I0127 18:40:40.041576 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-k9ljd"]
Jan 27 18:40:40 crc kubenswrapper[5049]: I0127 18:40:40.050539 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-k9ljd"]
Jan 27 18:40:41 crc kubenswrapper[5049]: I0127 18:40:41.028596 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6731-account-create-update-5jb56"]
Jan 27 18:40:41 crc kubenswrapper[5049]: I0127 18:40:41.038749 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6731-account-create-update-5jb56"]
Jan 27 18:40:41 crc kubenswrapper[5049]: I0127 18:40:41.666303 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af843ebd-0b01-45b3-9520-6d9375d9edee" path="/var/lib/kubelet/pods/af843ebd-0b01-45b3-9520-6d9375d9edee/volumes"
Jan 27 18:40:41 crc kubenswrapper[5049]: I0127 18:40:41.667645 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb5fb317-bb87-451b-b0d3-696a9b32b1cb" path="/var/lib/kubelet/pods/fb5fb317-bb87-451b-b0d3-696a9b32b1cb/volumes"
Jan 27 18:40:43 crc kubenswrapper[5049]: I0127 18:40:43.646839 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"
Jan 27 18:40:43 crc kubenswrapper[5049]: E0127 18:40:43.647356 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:40:47 crc kubenswrapper[5049]: I0127 18:40:47.033274 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-whn7b"]
Jan 27 18:40:47 crc kubenswrapper[5049]: I0127 18:40:47.044585 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-whn7b"]
Jan 27 18:40:47 crc kubenswrapper[5049]: I0127 18:40:47.656763 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85e646e4-1dd4-4feb-b585-e6e85dec1822" path="/var/lib/kubelet/pods/85e646e4-1dd4-4feb-b585-e6e85dec1822/volumes"
Jan 27 18:40:55 crc kubenswrapper[5049]: I0127 18:40:55.657804 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"
Jan 27 18:40:55 crc kubenswrapper[5049]: E0127 18:40:55.658700 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:41:07 crc kubenswrapper[5049]: I0127 18:41:07.650550 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"
Jan 27 18:41:07 crc kubenswrapper[5049]: E0127 18:41:07.652145 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:41:18 crc kubenswrapper[5049]: I0127 18:41:18.377072 5049 scope.go:117] "RemoveContainer" containerID="e621beb8761f27433d89f78a7a7ac981e929460cd13a3ac9bc18b425354365f0"
Jan 27 18:41:18 crc kubenswrapper[5049]: I0127 18:41:18.416759 5049 scope.go:117] "RemoveContainer" containerID="0ca7b1bea2b2b7078eea98c9c9dce521fd838ea3cfe8a84e150c8ddc9d456419"
Jan 27 18:41:18 crc kubenswrapper[5049]: I0127 18:41:18.463041 5049 scope.go:117] "RemoveContainer" containerID="f55509a753c5725b4a1b08693bf525e4bfada980fe3a3250510c8a0ed11eece2"
Jan 27 18:41:19 crc kubenswrapper[5049]: I0127 18:41:19.646749 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"
Jan 27 18:41:19 crc kubenswrapper[5049]: E0127 18:41:19.647722 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:41:30 crc kubenswrapper[5049]: I0127 18:41:30.646951 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"
Jan 27 18:41:30 crc kubenswrapper[5049]: E0127 18:41:30.648464 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:41:43 crc kubenswrapper[5049]: I0127 18:41:43.647313 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"
Jan 27 18:41:43 crc kubenswrapper[5049]: E0127 18:41:43.648360 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:41:48 crc kubenswrapper[5049]: I0127 18:41:48.048142 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-9dchc"]
Jan 27 18:41:48 crc kubenswrapper[5049]: I0127 18:41:48.064418 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-b44a-account-create-update-4hmwk"]
Jan 27 18:41:48 crc kubenswrapper[5049]: I0127 18:41:48.071983 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7h4kt"]
Jan 27 18:41:48 crc kubenswrapper[5049]: I0127 18:41:48.080259 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-b44a-account-create-update-4hmwk"]
Jan 27 18:41:48 crc kubenswrapper[5049]: I0127 18:41:48.088530 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7h4kt"]
Jan 27 18:41:48 crc kubenswrapper[5049]: I0127 18:41:48.098245 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-bg748"]
Jan 27 18:41:48 crc kubenswrapper[5049]: I0127 18:41:48.106124 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-9dchc"]
Jan 27 18:41:48 crc kubenswrapper[5049]: I0127 18:41:48.113312 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-bg748"]
Jan 27 18:41:49 crc kubenswrapper[5049]: I0127 18:41:49.035642 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8dc3-account-create-update-lbgkl"]
Jan 27 18:41:49 crc kubenswrapper[5049]: I0127 18:41:49.047526 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6831-account-create-update-8stn7"]
Jan 27 18:41:49 crc kubenswrapper[5049]: I0127 18:41:49.055634 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8dc3-account-create-update-lbgkl"]
Jan 27 18:41:49 crc kubenswrapper[5049]: I0127 18:41:49.063157 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6831-account-create-update-8stn7"]
Jan 27 18:41:49 crc kubenswrapper[5049]: I0127 18:41:49.658459 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c5a8837-eb01-432c-bb15-d4f4ff541037" path="/var/lib/kubelet/pods/7c5a8837-eb01-432c-bb15-d4f4ff541037/volumes"
Jan 27 18:41:49 crc kubenswrapper[5049]: I0127 18:41:49.659470 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="997aff65-10ec-4070-b64e-34a4e434dde9" path="/var/lib/kubelet/pods/997aff65-10ec-4070-b64e-34a4e434dde9/volumes"
Jan 27 18:41:49 crc kubenswrapper[5049]: I0127 18:41:49.660414 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a575262a-6daf-48b8-a260-386421b4d4bc" path="/var/lib/kubelet/pods/a575262a-6daf-48b8-a260-386421b4d4bc/volumes"
Jan 27 18:41:49 crc kubenswrapper[5049]: I0127 18:41:49.661798 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b39c71ed-9976-4656-a5cd-16b9a340d80e" path="/var/lib/kubelet/pods/b39c71ed-9976-4656-a5cd-16b9a340d80e/volumes"
Jan 27 18:41:49 crc kubenswrapper[5049]: I0127 18:41:49.663261 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea96bc12-3a0b-4587-bcb1-ce13464facd7" path="/var/lib/kubelet/pods/ea96bc12-3a0b-4587-bcb1-ce13464facd7/volumes"
Jan 27 18:41:49 crc kubenswrapper[5049]: I0127 18:41:49.663824 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa894f99-836d-4f71-853c-a90e0a049382" path="/var/lib/kubelet/pods/fa894f99-836d-4f71-853c-a90e0a049382/volumes"
Jan 27 18:41:57 crc kubenswrapper[5049]: I0127 18:41:57.646085 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"
Jan 27 18:41:57 crc kubenswrapper[5049]: E0127 18:41:57.646639 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:42:03 crc kubenswrapper[5049]: I0127 18:42:03.045511 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hdq4d"]
Jan 27 18:42:03 crc kubenswrapper[5049]: I0127 18:42:03.054219 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hdq4d"]
Jan 27 18:42:03 crc kubenswrapper[5049]: I0127 18:42:03.657333 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1910ee-4110-4567-b305-013b7a8f6102" path="/var/lib/kubelet/pods/bf1910ee-4110-4567-b305-013b7a8f6102/volumes"
Jan 27 18:42:10 crc kubenswrapper[5049]: I0127 18:42:10.646714 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"
Jan 27 18:42:10 crc kubenswrapper[5049]: E0127 18:42:10.649759 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 18:42:18 crc kubenswrapper[5049]: I0127 18:42:18.581091 5049 scope.go:117] "RemoveContainer" containerID="9a518cd8b0f60726222af34c220957d9be5854681d8b0f1ceab2cf5173ea31dd"
Jan 27 18:42:18 crc kubenswrapper[5049]: I0127 18:42:18.611109 5049 scope.go:117] "RemoveContainer" containerID="4264ca35f7db40338b0d6745f81851d74e250a0db4f408edb773b3c73cc91844"
Jan 27 18:42:18 crc kubenswrapper[5049]: I0127 18:42:18.683256 5049 scope.go:117] "RemoveContainer" containerID="3f44d425504126e2ae93492477c7492ea8dc2ba92bff8d3b5d011e8d38266140"
Jan 27 18:42:18 crc kubenswrapper[5049]: I0127 18:42:18.741541 5049 scope.go:117] "RemoveContainer" containerID="aed5119a51351316143ef9f1ba64f4a18eb589ce36185c2aa03cfffdb8b6861a"
Jan 27 18:42:18 crc kubenswrapper[5049]: I0127 18:42:18.774679 5049 scope.go:117] "RemoveContainer" containerID="a76ae62f2b484ca8afe3275e49dfc16b86893736204a5a34add176dd58b7493d"
Jan 27 18:42:18 crc kubenswrapper[5049]: I0127 18:42:18.843073 5049 scope.go:117] "RemoveContainer" containerID="da8b1747c79193f3804a359e558e2984afd0b675a51e4165dbb48d64c3d74bc7"
Jan 27 18:42:18 crc kubenswrapper[5049]: I0127 18:42:18.868762 5049 scope.go:117] "RemoveContainer" containerID="f96d0b91976932537c5d88df64e2600cbcfa5288b1c40fa840fa60285eb64256"
Jan 27 18:42:22 crc kubenswrapper[5049]: I0127 18:42:22.067529 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7pqbr"]
Jan 27 18:42:22 crc kubenswrapper[5049]: I0127 18:42:22.083971 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7pqbr"]
Jan 27 18:42:23 crc kubenswrapper[5049]: I0127 18:42:23.044367 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-s5msl"]
Jan 27 18:42:23 crc kubenswrapper[5049]: I0127 18:42:23.049498 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-s5msl"]
Jan 27 18:42:23 crc kubenswrapper[5049]: I0127 18:42:23.662941 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1711b5d9-b776-40c9-ad56-389cf4174909" path="/var/lib/kubelet/pods/1711b5d9-b776-40c9-ad56-389cf4174909/volumes"
Jan 27 18:42:23 crc kubenswrapper[5049]: I0127 18:42:23.665801 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8349c90d-4c31-46c3-8400-fb68fc6f2810" path="/var/lib/kubelet/pods/8349c90d-4c31-46c3-8400-fb68fc6f2810/volumes"
Jan 27 18:42:25 crc kubenswrapper[5049]: I0127 18:42:25.655372 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8"
Jan 27 18:42:26 crc kubenswrapper[5049]: I0127 18:42:26.785373 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"b0f60949c88e8ca99405409049f18e69de8736b36fdcaaaafc250436535c3831"}
Jan 27 18:42:36 crc kubenswrapper[5049]: I0127 18:42:36.058801 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-nmjjt"]
Jan 27 18:42:36 crc kubenswrapper[5049]: I0127 18:42:36.092494 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-nmjjt"]
Jan 27 18:42:37 crc kubenswrapper[5049]: I0127 18:42:37.656943 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d58451c-87da-448a-8923-a6c89915ef90" path="/var/lib/kubelet/pods/7d58451c-87da-448a-8923-a6c89915ef90/volumes"
Jan 27 18:43:19 crc kubenswrapper[5049]: I0127 18:43:19.010046 5049 scope.go:117] "RemoveContainer" containerID="e57eb623eee5cd4205aadacbfb109bd00bfa9c5002d90ea0f70cb47a08e187fe"
Jan 27 18:43:19 crc kubenswrapper[5049]: I0127 18:43:19.062869 5049 scope.go:117] "RemoveContainer" containerID="da3083e155ad5bfa8f8f6b445b482cdd67c8b0ede50bb5837bd398ac00de15c1"
Jan 27 18:43:19 crc kubenswrapper[5049]: I0127 18:43:19.099565 5049 scope.go:117] "RemoveContainer" containerID="0e373a1884cabeee0e2a0ac78949b8c6aac26c2a4c8df6657681bb8bcbc81848"
Jan 27 18:43:19 crc kubenswrapper[5049]: I0127 18:43:19.164584 5049 scope.go:117] "RemoveContainer" containerID="0ca8c9b5f65773a92b01cbf3525e1b4344752b0501c7310325360ff93787dc3a"
Jan 27 18:43:19 crc kubenswrapper[5049]: I0127 18:43:19.219591 5049 scope.go:117] "RemoveContainer" containerID="c36fb37279884d0b0cbe682dfc8c129a92681f1aacb0ce60120424ebada46e97"
Jan 27 18:43:21 crc kubenswrapper[5049]: I0127 18:43:21.040838 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c611-account-create-update-zhhj6"]
Jan 27 18:43:21 crc kubenswrapper[5049]: I0127 18:43:21.053693 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c611-account-create-update-zhhj6"]
Jan 27 18:43:21 crc kubenswrapper[5049]: I0127 18:43:21.660216 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc0fff6-620d-4f4d-9378-b64d4e8c686c" path="/var/lib/kubelet/pods/9bc0fff6-620d-4f4d-9378-b64d4e8c686c/volumes"
Jan 27 18:43:22 crc kubenswrapper[5049]: I0127 18:43:22.046136 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-z4wp7"]
Jan 27 18:43:22 crc kubenswrapper[5049]: I0127 18:43:22.057533 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-z4wp7"]
Jan 27 18:43:23 crc kubenswrapper[5049]: I0127 18:43:23.659718 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f552469c-419b-4eeb-9c8b-9b47f74d74c1" path="/var/lib/kubelet/pods/f552469c-419b-4eeb-9c8b-9b47f74d74c1/volumes"
Jan 27 18:43:30 crc kubenswrapper[5049]: I0127 18:43:30.024740 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-ktn6t"]
Jan 27 18:43:30 crc kubenswrapper[5049]: I0127 18:43:30.033390 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-ktn6t"]
Jan 27 18:43:31 crc kubenswrapper[5049]: I0127 18:43:31.655783 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe4fdc4-d12a-4de8-b88c-298bfb0567b8" path="/var/lib/kubelet/pods/dbe4fdc4-d12a-4de8-b88c-298bfb0567b8/volumes"
Jan 27 18:44:19 crc kubenswrapper[5049]: I0127 18:44:19.352417 5049 scope.go:117] "RemoveContainer" containerID="99fb83e0736ed0cb24cb4609318a981b9920652c6eea732f3673ea349c87b4bc"
Jan 27 18:44:19 crc kubenswrapper[5049]: I0127 18:44:19.378096 5049 scope.go:117] "RemoveContainer" containerID="a5de39a689fab0e265f5a87b7755a03098b2caee501f1b62c595a9dc890d464f"
Jan 27 18:44:19 crc kubenswrapper[5049]: I0127 18:44:19.419907 5049 scope.go:117] "RemoveContainer" containerID="9c3a55c28dfb494fff1ee02c60f86f56efcef10ce7ef21d11826522cb54bafc4"
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.819382 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qrqnw"]
Jan 27 18:44:40 crc kubenswrapper[5049]: E0127 18:44:40.820481 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d735449-e8de-4d1d-b259-3b6710d9d08a" containerName="extract-utilities"
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.820497 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d735449-e8de-4d1d-b259-3b6710d9d08a" containerName="extract-utilities"
Jan 27 18:44:40 crc kubenswrapper[5049]: E0127 18:44:40.820516 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d735449-e8de-4d1d-b259-3b6710d9d08a" containerName="registry-server"
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.820524 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d735449-e8de-4d1d-b259-3b6710d9d08a" containerName="registry-server"
Jan 27 18:44:40 crc kubenswrapper[5049]: E0127 18:44:40.820564 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d735449-e8de-4d1d-b259-3b6710d9d08a" containerName="extract-content"
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.820572 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d735449-e8de-4d1d-b259-3b6710d9d08a" containerName="extract-content"
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.820793 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d735449-e8de-4d1d-b259-3b6710d9d08a" containerName="registry-server"
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.822446 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.835057 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qrqnw"]
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.889478 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc0368b3-5556-498d-89b6-e2f9a00b1c57-utilities\") pod \"certified-operators-qrqnw\" (UID: \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\") " pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.889532 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc0368b3-5556-498d-89b6-e2f9a00b1c57-catalog-content\") pod \"certified-operators-qrqnw\" (UID: \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\") " pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.889852 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc9hq\" (UniqueName: \"kubernetes.io/projected/bc0368b3-5556-498d-89b6-e2f9a00b1c57-kube-api-access-pc9hq\") pod \"certified-operators-qrqnw\" (UID: \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\") " pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.991705 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc9hq\" (UniqueName: \"kubernetes.io/projected/bc0368b3-5556-498d-89b6-e2f9a00b1c57-kube-api-access-pc9hq\") pod \"certified-operators-qrqnw\" (UID: \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\") " pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.991780 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc0368b3-5556-498d-89b6-e2f9a00b1c57-utilities\") pod \"certified-operators-qrqnw\" (UID: \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\") " pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.991809 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc0368b3-5556-498d-89b6-e2f9a00b1c57-catalog-content\") pod \"certified-operators-qrqnw\" (UID: \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\") " pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.992247 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc0368b3-5556-498d-89b6-e2f9a00b1c57-utilities\") pod \"certified-operators-qrqnw\" (UID: \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\") " pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:40 crc kubenswrapper[5049]: I0127 18:44:40.992279 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc0368b3-5556-498d-89b6-e2f9a00b1c57-catalog-content\") pod \"certified-operators-qrqnw\" (UID: \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\") " pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:41 crc kubenswrapper[5049]: I0127 18:44:41.020001 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc9hq\" (UniqueName: \"kubernetes.io/projected/bc0368b3-5556-498d-89b6-e2f9a00b1c57-kube-api-access-pc9hq\") pod \"certified-operators-qrqnw\" (UID: \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\") " pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:41 crc kubenswrapper[5049]: I0127 18:44:41.163826 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:41 crc kubenswrapper[5049]: I0127 18:44:41.722643 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qrqnw"]
Jan 27 18:44:41 crc kubenswrapper[5049]: W0127 18:44:41.728878 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc0368b3_5556_498d_89b6_e2f9a00b1c57.slice/crio-05ec39b69d3baca6252b12c17bed8fa9ea79abfaadf8e8a547cdb0598ee812c5 WatchSource:0}: Error finding container 05ec39b69d3baca6252b12c17bed8fa9ea79abfaadf8e8a547cdb0598ee812c5: Status 404 returned error can't find the container with id 05ec39b69d3baca6252b12c17bed8fa9ea79abfaadf8e8a547cdb0598ee812c5
Jan 27 18:44:42 crc kubenswrapper[5049]: I0127 18:44:42.653226 5049 generic.go:334] "Generic (PLEG): container finished" podID="bc0368b3-5556-498d-89b6-e2f9a00b1c57" containerID="7e3b446cce878e69d655ad938c6d8fb3eee9f44e6388f5c7e4c90f4190def74f" exitCode=0
Jan 27 18:44:42 crc kubenswrapper[5049]: I0127 18:44:42.653330 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrqnw" event={"ID":"bc0368b3-5556-498d-89b6-e2f9a00b1c57","Type":"ContainerDied","Data":"7e3b446cce878e69d655ad938c6d8fb3eee9f44e6388f5c7e4c90f4190def74f"}
Jan 27 18:44:42 crc kubenswrapper[5049]: I0127 18:44:42.653627 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrqnw" event={"ID":"bc0368b3-5556-498d-89b6-e2f9a00b1c57","Type":"ContainerStarted","Data":"05ec39b69d3baca6252b12c17bed8fa9ea79abfaadf8e8a547cdb0598ee812c5"}
Jan 27 18:44:42 crc kubenswrapper[5049]: I0127 18:44:42.655198 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 18:44:44 crc kubenswrapper[5049]: E0127 18:44:44.380872 5049 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc0368b3_5556_498d_89b6_e2f9a00b1c57.slice/crio-conmon-8a64fbd008df9cf71b421dc6b213dcdb8b785c86bdaf5032c50cea4d64801421.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 18:44:44 crc kubenswrapper[5049]: I0127 18:44:44.674487 5049 generic.go:334] "Generic (PLEG): container finished" podID="bc0368b3-5556-498d-89b6-e2f9a00b1c57" containerID="8a64fbd008df9cf71b421dc6b213dcdb8b785c86bdaf5032c50cea4d64801421" exitCode=0
Jan 27 18:44:44 crc kubenswrapper[5049]: I0127 18:44:44.674547 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrqnw" event={"ID":"bc0368b3-5556-498d-89b6-e2f9a00b1c57","Type":"ContainerDied","Data":"8a64fbd008df9cf71b421dc6b213dcdb8b785c86bdaf5032c50cea4d64801421"}
Jan 27 18:44:45 crc kubenswrapper[5049]: I0127 18:44:45.685609 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrqnw" event={"ID":"bc0368b3-5556-498d-89b6-e2f9a00b1c57","Type":"ContainerStarted","Data":"0fc394e04927d4653f394761c120c205a126948ab90270e9f76a168e3b99a795"}
Jan 27 18:44:45 crc kubenswrapper[5049]: I0127 18:44:45.710067 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qrqnw" podStartSLOduration=3.130016388 podStartE2EDuration="5.71005084s" podCreationTimestamp="2026-01-27 18:44:40 +0000 UTC" firstStartedPulling="2026-01-27 18:44:42.654946982 +0000 UTC m=+6457.753920531" lastFinishedPulling="2026-01-27 18:44:45.234981394 +0000 UTC m=+6460.333954983" observedRunningTime="2026-01-27 18:44:45.70724809 +0000 UTC m=+6460.806221649" watchObservedRunningTime="2026-01-27 18:44:45.71005084 +0000 UTC m=+6460.809024389"
Jan 27 18:44:47 crc kubenswrapper[5049]: I0127 18:44:47.781613 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 18:44:47 crc kubenswrapper[5049]: I0127 18:44:47.782444 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 18:44:51 crc kubenswrapper[5049]: I0127 18:44:51.165035 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:51 crc kubenswrapper[5049]: I0127 18:44:51.165357 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:51 crc kubenswrapper[5049]: I0127 18:44:51.245575 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:51 crc kubenswrapper[5049]: I0127 18:44:51.788111 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:51 crc kubenswrapper[5049]: I0127 18:44:51.834882 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qrqnw"]
Jan 27 18:44:53 crc kubenswrapper[5049]: I0127 18:44:53.756894 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qrqnw" podUID="bc0368b3-5556-498d-89b6-e2f9a00b1c57" containerName="registry-server" containerID="cri-o://0fc394e04927d4653f394761c120c205a126948ab90270e9f76a168e3b99a795" gracePeriod=2
Jan 27 18:44:54 crc kubenswrapper[5049]: I0127 18:44:54.284341 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:54 crc kubenswrapper[5049]: I0127 18:44:54.367008 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc9hq\" (UniqueName: \"kubernetes.io/projected/bc0368b3-5556-498d-89b6-e2f9a00b1c57-kube-api-access-pc9hq\") pod \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\" (UID: \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\") "
Jan 27 18:44:54 crc kubenswrapper[5049]: I0127 18:44:54.367370 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc0368b3-5556-498d-89b6-e2f9a00b1c57-catalog-content\") pod \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\" (UID: \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\") "
Jan 27 18:44:54 crc kubenswrapper[5049]: I0127 18:44:54.367429 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc0368b3-5556-498d-89b6-e2f9a00b1c57-utilities\") pod \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\" (UID: \"bc0368b3-5556-498d-89b6-e2f9a00b1c57\") "
Jan 27 18:44:54 crc kubenswrapper[5049]: I0127 18:44:54.368368 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc0368b3-5556-498d-89b6-e2f9a00b1c57-utilities" (OuterVolumeSpecName: "utilities") pod "bc0368b3-5556-498d-89b6-e2f9a00b1c57" (UID: "bc0368b3-5556-498d-89b6-e2f9a00b1c57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:44:54 crc kubenswrapper[5049]: I0127 18:44:54.373988 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc0368b3-5556-498d-89b6-e2f9a00b1c57-kube-api-access-pc9hq" (OuterVolumeSpecName: "kube-api-access-pc9hq") pod "bc0368b3-5556-498d-89b6-e2f9a00b1c57" (UID: "bc0368b3-5556-498d-89b6-e2f9a00b1c57"). InnerVolumeSpecName "kube-api-access-pc9hq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:44:54 crc kubenswrapper[5049]: I0127 18:44:54.411580 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc0368b3-5556-498d-89b6-e2f9a00b1c57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc0368b3-5556-498d-89b6-e2f9a00b1c57" (UID: "bc0368b3-5556-498d-89b6-e2f9a00b1c57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:44:54 crc kubenswrapper[5049]: I0127 18:44:54.469184 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc0368b3-5556-498d-89b6-e2f9a00b1c57-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:54 crc kubenswrapper[5049]: I0127 18:44:54.469213 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc0368b3-5556-498d-89b6-e2f9a00b1c57-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:54 crc kubenswrapper[5049]: I0127 18:44:54.469224 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc9hq\" (UniqueName: \"kubernetes.io/projected/bc0368b3-5556-498d-89b6-e2f9a00b1c57-kube-api-access-pc9hq\") on node \"crc\" DevicePath \"\""
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:54.766125 5049 generic.go:334] "Generic (PLEG): container finished" podID="bc0368b3-5556-498d-89b6-e2f9a00b1c57" containerID="0fc394e04927d4653f394761c120c205a126948ab90270e9f76a168e3b99a795" exitCode=0
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:54.766160 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrqnw" event={"ID":"bc0368b3-5556-498d-89b6-e2f9a00b1c57","Type":"ContainerDied","Data":"0fc394e04927d4653f394761c120c205a126948ab90270e9f76a168e3b99a795"}
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:54.766162 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qrqnw"
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:54.766196 5049 scope.go:117] "RemoveContainer" containerID="0fc394e04927d4653f394761c120c205a126948ab90270e9f76a168e3b99a795"
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:54.766186 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qrqnw" event={"ID":"bc0368b3-5556-498d-89b6-e2f9a00b1c57","Type":"ContainerDied","Data":"05ec39b69d3baca6252b12c17bed8fa9ea79abfaadf8e8a547cdb0598ee812c5"}
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:54.819539 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qrqnw"]
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:54.820195 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qrqnw"]
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:55.186888 5049 scope.go:117] "RemoveContainer" containerID="8a64fbd008df9cf71b421dc6b213dcdb8b785c86bdaf5032c50cea4d64801421"
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:55.221450 5049 scope.go:117] "RemoveContainer" containerID="7e3b446cce878e69d655ad938c6d8fb3eee9f44e6388f5c7e4c90f4190def74f"
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:55.274264 5049 scope.go:117] "RemoveContainer" containerID="0fc394e04927d4653f394761c120c205a126948ab90270e9f76a168e3b99a795"
Jan 27 18:44:55 crc kubenswrapper[5049]: E0127 18:44:55.274896 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fc394e04927d4653f394761c120c205a126948ab90270e9f76a168e3b99a795\": container with ID starting with 0fc394e04927d4653f394761c120c205a126948ab90270e9f76a168e3b99a795 not found: ID does not exist" containerID="0fc394e04927d4653f394761c120c205a126948ab90270e9f76a168e3b99a795"
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:55.274933 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fc394e04927d4653f394761c120c205a126948ab90270e9f76a168e3b99a795"} err="failed to get container status \"0fc394e04927d4653f394761c120c205a126948ab90270e9f76a168e3b99a795\": rpc error: code = NotFound desc = could not find container \"0fc394e04927d4653f394761c120c205a126948ab90270e9f76a168e3b99a795\": container with ID starting with 0fc394e04927d4653f394761c120c205a126948ab90270e9f76a168e3b99a795 not found: ID does not exist"
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:55.274957 5049 scope.go:117] "RemoveContainer" containerID="8a64fbd008df9cf71b421dc6b213dcdb8b785c86bdaf5032c50cea4d64801421"
Jan 27 18:44:55 crc kubenswrapper[5049]: E0127 18:44:55.275453 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a64fbd008df9cf71b421dc6b213dcdb8b785c86bdaf5032c50cea4d64801421\": container with ID starting with 8a64fbd008df9cf71b421dc6b213dcdb8b785c86bdaf5032c50cea4d64801421 not found: ID does not exist" containerID="8a64fbd008df9cf71b421dc6b213dcdb8b785c86bdaf5032c50cea4d64801421"
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:55.275502 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a64fbd008df9cf71b421dc6b213dcdb8b785c86bdaf5032c50cea4d64801421"} err="failed to get container status \"8a64fbd008df9cf71b421dc6b213dcdb8b785c86bdaf5032c50cea4d64801421\": rpc error: code = NotFound desc = could not find container \"8a64fbd008df9cf71b421dc6b213dcdb8b785c86bdaf5032c50cea4d64801421\": container with ID starting with 8a64fbd008df9cf71b421dc6b213dcdb8b785c86bdaf5032c50cea4d64801421 not found: ID does not exist"
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:55.275531 5049 scope.go:117] "RemoveContainer" containerID="7e3b446cce878e69d655ad938c6d8fb3eee9f44e6388f5c7e4c90f4190def74f"
Jan 27 18:44:55 crc kubenswrapper[5049]: E0127 18:44:55.277038 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e3b446cce878e69d655ad938c6d8fb3eee9f44e6388f5c7e4c90f4190def74f\": container with ID starting with 7e3b446cce878e69d655ad938c6d8fb3eee9f44e6388f5c7e4c90f4190def74f not found: ID does not exist" containerID="7e3b446cce878e69d655ad938c6d8fb3eee9f44e6388f5c7e4c90f4190def74f"
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:55.277060 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e3b446cce878e69d655ad938c6d8fb3eee9f44e6388f5c7e4c90f4190def74f"} err="failed to get container status \"7e3b446cce878e69d655ad938c6d8fb3eee9f44e6388f5c7e4c90f4190def74f\": rpc error: code = NotFound desc = could not find container \"7e3b446cce878e69d655ad938c6d8fb3eee9f44e6388f5c7e4c90f4190def74f\": container with ID starting with 7e3b446cce878e69d655ad938c6d8fb3eee9f44e6388f5c7e4c90f4190def74f not found: ID does not exist"
Jan 27 18:44:55 crc kubenswrapper[5049]: I0127 18:44:55.670654 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc0368b3-5556-498d-89b6-e2f9a00b1c57" path="/var/lib/kubelet/pods/bc0368b3-5556-498d-89b6-e2f9a00b1c57/volumes"
Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.163706 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx"]
removing container" podUID="bc0368b3-5556-498d-89b6-e2f9a00b1c57" containerName="extract-content" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.164554 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0368b3-5556-498d-89b6-e2f9a00b1c57" containerName="extract-content" Jan 27 18:45:00 crc kubenswrapper[5049]: E0127 18:45:00.164565 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0368b3-5556-498d-89b6-e2f9a00b1c57" containerName="registry-server" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.164571 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0368b3-5556-498d-89b6-e2f9a00b1c57" containerName="registry-server" Jan 27 18:45:00 crc kubenswrapper[5049]: E0127 18:45:00.164590 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc0368b3-5556-498d-89b6-e2f9a00b1c57" containerName="extract-utilities" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.164596 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc0368b3-5556-498d-89b6-e2f9a00b1c57" containerName="extract-utilities" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.164793 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc0368b3-5556-498d-89b6-e2f9a00b1c57" containerName="registry-server" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.165478 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.172256 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx"] Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.197360 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.197634 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.298383 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-secret-volume\") pod \"collect-profiles-29492325-xrxdx\" (UID: \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.298538 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bmq7\" (UniqueName: \"kubernetes.io/projected/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-kube-api-access-8bmq7\") pod \"collect-profiles-29492325-xrxdx\" (UID: \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.298597 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-config-volume\") pod \"collect-profiles-29492325-xrxdx\" (UID: \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.400153 5049 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8bmq7\" (UniqueName: \"kubernetes.io/projected/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-kube-api-access-8bmq7\") pod \"collect-profiles-29492325-xrxdx\" (UID: \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.400318 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-config-volume\") pod \"collect-profiles-29492325-xrxdx\" (UID: \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.400534 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-secret-volume\") pod \"collect-profiles-29492325-xrxdx\" (UID: \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.402125 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-config-volume\") pod \"collect-profiles-29492325-xrxdx\" (UID: \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.415530 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-secret-volume\") pod \"collect-profiles-29492325-xrxdx\" (UID: \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.427509 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bmq7\" (UniqueName: \"kubernetes.io/projected/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-kube-api-access-8bmq7\") pod \"collect-profiles-29492325-xrxdx\" (UID: \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.522395 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" Jan 27 18:45:00 crc kubenswrapper[5049]: I0127 18:45:00.940464 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx"] Jan 27 18:45:00 crc kubenswrapper[5049]: W0127 18:45:00.945136 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6070e364_ce14_4426_9a5d_29e7fe1c4d5d.slice/crio-3c4c40653be4d6e505d84be1e4060061365f77e82caa0478e849ed2b2f3135e7 WatchSource:0}: Error finding container 3c4c40653be4d6e505d84be1e4060061365f77e82caa0478e849ed2b2f3135e7: Status 404 returned error can't find the container with id 3c4c40653be4d6e505d84be1e4060061365f77e82caa0478e849ed2b2f3135e7 Jan 27 18:45:01 crc kubenswrapper[5049]: I0127 18:45:01.831041 5049 generic.go:334] "Generic (PLEG): container finished" podID="6070e364-ce14-4426-9a5d-29e7fe1c4d5d" containerID="7e6dc244639ec28c3a3e385a107ca03985d591dda3d4272317b0a32ae0d375ca" exitCode=0 Jan 27 18:45:01 crc kubenswrapper[5049]: I0127 18:45:01.831090 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" event={"ID":"6070e364-ce14-4426-9a5d-29e7fe1c4d5d","Type":"ContainerDied","Data":"7e6dc244639ec28c3a3e385a107ca03985d591dda3d4272317b0a32ae0d375ca"} Jan 27 18:45:01 crc kubenswrapper[5049]: I0127 18:45:01.831345 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" event={"ID":"6070e364-ce14-4426-9a5d-29e7fe1c4d5d","Type":"ContainerStarted","Data":"3c4c40653be4d6e505d84be1e4060061365f77e82caa0478e849ed2b2f3135e7"} Jan 27 18:45:03 crc kubenswrapper[5049]: I0127 18:45:03.137473 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" Jan 27 18:45:03 crc kubenswrapper[5049]: I0127 18:45:03.263977 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-secret-volume\") pod \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\" (UID: \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\") " Jan 27 18:45:03 crc kubenswrapper[5049]: I0127 18:45:03.264103 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bmq7\" (UniqueName: \"kubernetes.io/projected/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-kube-api-access-8bmq7\") pod \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\" (UID: \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\") " Jan 27 18:45:03 crc kubenswrapper[5049]: I0127 18:45:03.264234 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-config-volume\") pod \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\" (UID: \"6070e364-ce14-4426-9a5d-29e7fe1c4d5d\") " Jan 27 18:45:03 crc kubenswrapper[5049]: I0127 18:45:03.264993 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-config-volume" (OuterVolumeSpecName: "config-volume") pod "6070e364-ce14-4426-9a5d-29e7fe1c4d5d" (UID: "6070e364-ce14-4426-9a5d-29e7fe1c4d5d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 18:45:03 crc kubenswrapper[5049]: I0127 18:45:03.270453 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6070e364-ce14-4426-9a5d-29e7fe1c4d5d" (UID: "6070e364-ce14-4426-9a5d-29e7fe1c4d5d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 18:45:03 crc kubenswrapper[5049]: I0127 18:45:03.270754 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-kube-api-access-8bmq7" (OuterVolumeSpecName: "kube-api-access-8bmq7") pod "6070e364-ce14-4426-9a5d-29e7fe1c4d5d" (UID: "6070e364-ce14-4426-9a5d-29e7fe1c4d5d"). InnerVolumeSpecName "kube-api-access-8bmq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:45:03 crc kubenswrapper[5049]: I0127 18:45:03.367407 5049 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:03 crc kubenswrapper[5049]: I0127 18:45:03.367450 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bmq7\" (UniqueName: \"kubernetes.io/projected/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-kube-api-access-8bmq7\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:03 crc kubenswrapper[5049]: I0127 18:45:03.367463 5049 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6070e364-ce14-4426-9a5d-29e7fe1c4d5d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 18:45:03 crc kubenswrapper[5049]: I0127 18:45:03.853928 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx" event={"ID":"6070e364-ce14-4426-9a5d-29e7fe1c4d5d","Type":"ContainerDied","Data":"3c4c40653be4d6e505d84be1e4060061365f77e82caa0478e849ed2b2f3135e7"} Jan 27 18:45:03 crc kubenswrapper[5049]: I0127 18:45:03.853983 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c4c40653be4d6e505d84be1e4060061365f77e82caa0478e849ed2b2f3135e7" Jan 27 18:45:03 crc kubenswrapper[5049]: I0127 18:45:03.854126 5049 util.go:48] "No ready sandbox for pod can be found. 
Jan 27 18:45:04 crc kubenswrapper[5049]: I0127 18:45:04.260095 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq"] Jan 27 18:45:04 crc kubenswrapper[5049]: I0127 18:45:04.271849 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492280-ccpmq"] Jan 27 18:45:05 crc kubenswrapper[5049]: I0127 18:45:05.658539 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a4812a4-0a7d-494d-ad4f-c8350c518fbd" path="/var/lib/kubelet/pods/7a4812a4-0a7d-494d-ad4f-c8350c518fbd/volumes" Jan 27 18:45:17 crc kubenswrapper[5049]: I0127 18:45:17.781960 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:45:17 crc kubenswrapper[5049]: I0127 18:45:17.783645 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:45:19 crc kubenswrapper[5049]: I0127 18:45:19.551769 5049 scope.go:117] "RemoveContainer" containerID="646eb0a2de210109356a19fdb5da489f62c378817e42b2e9db68dd5b2c3d026d" Jan 27 18:45:47 crc kubenswrapper[5049]: I0127 18:45:47.781039 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:45:47 crc kubenswrapper[5049]: I0127 18:45:47.781643 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:45:47 crc kubenswrapper[5049]: I0127 18:45:47.781717 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 18:45:47 crc kubenswrapper[5049]: I0127 18:45:47.782516 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0f60949c88e8ca99405409049f18e69de8736b36fdcaaaafc250436535c3831"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:45:47 crc kubenswrapper[5049]: I0127 18:45:47.782585 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://b0f60949c88e8ca99405409049f18e69de8736b36fdcaaaafc250436535c3831" gracePeriod=600 Jan 27 18:45:48 crc kubenswrapper[5049]: I0127 18:45:48.284628 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="b0f60949c88e8ca99405409049f18e69de8736b36fdcaaaafc250436535c3831" exitCode=0
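
The recurring probe entries for machine-config-daemon-2d7n9 describe an HTTP liveness probe against http://127.0.0.1:8798/health being refused, after which the kubelet kills the container with its 600s grace period. A sketch of a probe with that shape, with illustrative thresholds; only the path, the port, and the restart-on-failure behaviour are taken from the log:

```go
// Sketch, not the MCO's actual manifest: a liveness probe matching the
// failures logged above. PeriodSeconds is inferred from the ~30s spacing of
// the probe entries; FailureThreshold is an assumption.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

var mcdLiveness = corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Host: "127.0.0.1",
			Path: "/health",
			Port: intstr.FromInt(8798),
		},
	},
	PeriodSeconds:    30, // probe entries above arrive ~30s apart
	FailureThreshold: 3,  // assumed
}
```
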
container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="b0f60949c88e8ca99405409049f18e69de8736b36fdcaaaafc250436535c3831" exitCode=0 Jan 27 18:45:48 crc kubenswrapper[5049]: I0127 18:45:48.284741 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"b0f60949c88e8ca99405409049f18e69de8736b36fdcaaaafc250436535c3831"} Jan 27 18:45:48 crc kubenswrapper[5049]: I0127 18:45:48.285082 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b"} Jan 27 18:45:48 crc kubenswrapper[5049]: I0127 18:45:48.285112 5049 scope.go:117] "RemoveContainer" containerID="fdfbc0211f6b31b6056c1aac0a2000082f6134f644f70497a191f341ad3e2ff8" Jan 27 18:46:15 crc kubenswrapper[5049]: I0127 18:46:15.037774 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-s84fx"] Jan 27 18:46:15 crc kubenswrapper[5049]: I0127 18:46:15.048828 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-s84fx"] Jan 27 18:46:15 crc kubenswrapper[5049]: I0127 18:46:15.667044 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d8f816-9132-4994-8c07-0dfa0dbc1726" path="/var/lib/kubelet/pods/b2d8f816-9132-4994-8c07-0dfa0dbc1726/volumes" Jan 27 18:46:16 crc kubenswrapper[5049]: I0127 18:46:16.034168 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-e6f8-account-create-update-q64ft"] Jan 27 18:46:16 crc kubenswrapper[5049]: I0127 18:46:16.043531 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-e6f8-account-create-update-q64ft"] Jan 27 18:46:17 crc kubenswrapper[5049]: I0127 18:46:17.659890 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a0442b3-58dd-4659-b3cb-cbc90ab9b80b" path="/var/lib/kubelet/pods/8a0442b3-58dd-4659-b3cb-cbc90ab9b80b/volumes" Jan 27 18:46:19 crc kubenswrapper[5049]: I0127 18:46:19.622944 5049 scope.go:117] "RemoveContainer" containerID="b390c4f3e448ca2bf751f06df616a787fc203572266a96b1b26783641e722e66" Jan 27 18:46:19 crc kubenswrapper[5049]: I0127 18:46:19.662624 5049 scope.go:117] "RemoveContainer" containerID="42479d220ad1c922424f712ae3dd2d4b906af1f358d576179b61b401f6600b18" Jan 27 18:46:23 crc kubenswrapper[5049]: I0127 18:46:23.048222 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-jrf2m"] Jan 27 18:46:23 crc kubenswrapper[5049]: I0127 18:46:23.055709 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-jrf2m"] Jan 27 18:46:23 crc kubenswrapper[5049]: I0127 18:46:23.661036 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13c7681e-c718-4c6a-9646-aba9f96018b0" path="/var/lib/kubelet/pods/13c7681e-c718-4c6a-9646-aba9f96018b0/volumes" Jan 27 18:46:24 crc kubenswrapper[5049]: I0127 18:46:24.029169 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-abd2-account-create-update-psv4r"] Jan 27 18:46:24 crc kubenswrapper[5049]: I0127 18:46:24.038791 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-abd2-account-create-update-psv4r"] Jan 27 18:46:25 crc kubenswrapper[5049]: I0127 18:46:25.659545 5049 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11" path="/var/lib/kubelet/pods/7a7eb388-c1a3-4277-b14c-1fbb8eb4bf11/volumes" Jan 27 18:47:11 crc kubenswrapper[5049]: I0127 18:47:11.043992 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-qg6hr"] Jan 27 18:47:11 crc kubenswrapper[5049]: I0127 18:47:11.053597 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-qg6hr"] Jan 27 18:47:11 crc kubenswrapper[5049]: I0127 18:47:11.657273 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aedefdb-f36a-4658-941f-3eac0242a8c9" path="/var/lib/kubelet/pods/2aedefdb-f36a-4658-941f-3eac0242a8c9/volumes" Jan 27 18:47:19 crc kubenswrapper[5049]: I0127 18:47:19.742406 5049 scope.go:117] "RemoveContainer" containerID="25a88ec595bf2a75caacd49036e51612a39616ba6c9ea00bf6554e2a01c4c5ae" Jan 27 18:47:19 crc kubenswrapper[5049]: I0127 18:47:19.784010 5049 scope.go:117] "RemoveContainer" containerID="3c7d16de5aed4260472af85fcc9d65344873dab7a88a997ce70e94a026bc57f7" Jan 27 18:47:19 crc kubenswrapper[5049]: I0127 18:47:19.818456 5049 scope.go:117] "RemoveContainer" containerID="907be253e26f102f604d88a6065405b2781f4cad077fe45d739c061608fc9cf6" Jan 27 18:47:19 crc kubenswrapper[5049]: I0127 18:47:19.863580 5049 scope.go:117] "RemoveContainer" containerID="f682a7222239980702714a322a48e281797f6e25f04a38e531d90df3412d53da" Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.229114 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xkv65"] Jan 27 18:47:49 crc kubenswrapper[5049]: E0127 18:47:49.230315 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6070e364-ce14-4426-9a5d-29e7fe1c4d5d" containerName="collect-profiles" Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.230332 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="6070e364-ce14-4426-9a5d-29e7fe1c4d5d" containerName="collect-profiles" Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.230596 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="6070e364-ce14-4426-9a5d-29e7fe1c4d5d" containerName="collect-profiles" Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.232371 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.249481 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xkv65"] Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.335649 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2ea20fb-0f3e-4544-8827-5f4826aa5613-utilities\") pod \"redhat-marketplace-xkv65\" (UID: \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\") " pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.335891 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dpg4\" (UniqueName: \"kubernetes.io/projected/d2ea20fb-0f3e-4544-8827-5f4826aa5613-kube-api-access-8dpg4\") pod \"redhat-marketplace-xkv65\" (UID: \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\") " pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.336203 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2ea20fb-0f3e-4544-8827-5f4826aa5613-catalog-content\") pod \"redhat-marketplace-xkv65\" (UID: \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\") " pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.438634 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2ea20fb-0f3e-4544-8827-5f4826aa5613-catalog-content\") pod \"redhat-marketplace-xkv65\" (UID: \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\") " pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.438855 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2ea20fb-0f3e-4544-8827-5f4826aa5613-utilities\") pod \"redhat-marketplace-xkv65\" (UID: \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\") " pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.438986 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dpg4\" (UniqueName: \"kubernetes.io/projected/d2ea20fb-0f3e-4544-8827-5f4826aa5613-kube-api-access-8dpg4\") pod \"redhat-marketplace-xkv65\" (UID: \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\") " pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.439778 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2ea20fb-0f3e-4544-8827-5f4826aa5613-utilities\") pod \"redhat-marketplace-xkv65\" (UID: \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\") " pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.439816 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2ea20fb-0f3e-4544-8827-5f4826aa5613-catalog-content\") pod \"redhat-marketplace-xkv65\" (UID: \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\") " pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.465601 5049 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8dpg4\" (UniqueName: \"kubernetes.io/projected/d2ea20fb-0f3e-4544-8827-5f4826aa5613-kube-api-access-8dpg4\") pod \"redhat-marketplace-xkv65\" (UID: \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\") " pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:49 crc kubenswrapper[5049]: I0127 18:47:49.556204 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:50 crc kubenswrapper[5049]: I0127 18:47:50.023963 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xkv65"] Jan 27 18:47:50 crc kubenswrapper[5049]: I0127 18:47:50.552116 5049 generic.go:334] "Generic (PLEG): container finished" podID="d2ea20fb-0f3e-4544-8827-5f4826aa5613" containerID="1121dcc54bef45b81a310420b84872dd48a950ca62863d59d5dca61f9ab7d625" exitCode=0 Jan 27 18:47:50 crc kubenswrapper[5049]: I0127 18:47:50.552154 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkv65" event={"ID":"d2ea20fb-0f3e-4544-8827-5f4826aa5613","Type":"ContainerDied","Data":"1121dcc54bef45b81a310420b84872dd48a950ca62863d59d5dca61f9ab7d625"} Jan 27 18:47:50 crc kubenswrapper[5049]: I0127 18:47:50.552176 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkv65" event={"ID":"d2ea20fb-0f3e-4544-8827-5f4826aa5613","Type":"ContainerStarted","Data":"b222d0f5976fe8ef64d9e8d3946f388b454ddd2f7f3343a364bb438036ece5eb"} Jan 27 18:47:52 crc kubenswrapper[5049]: I0127 18:47:52.570177 5049 generic.go:334] "Generic (PLEG): container finished" podID="d2ea20fb-0f3e-4544-8827-5f4826aa5613" containerID="4c5f997a40e5221f7bdca81724f0840b3d73d6741273dc147d7310274eb06928" exitCode=0 Jan 27 18:47:52 crc kubenswrapper[5049]: I0127 18:47:52.570254 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkv65" event={"ID":"d2ea20fb-0f3e-4544-8827-5f4826aa5613","Type":"ContainerDied","Data":"4c5f997a40e5221f7bdca81724f0840b3d73d6741273dc147d7310274eb06928"} Jan 27 18:47:53 crc kubenswrapper[5049]: I0127 18:47:53.583531 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkv65" event={"ID":"d2ea20fb-0f3e-4544-8827-5f4826aa5613","Type":"ContainerStarted","Data":"29c619f84e220dce543a4d289f80e16814b631e39e6189d81d6427d4975742c3"} Jan 27 18:47:53 crc kubenswrapper[5049]: I0127 18:47:53.630718 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xkv65" podStartSLOduration=2.06551544 podStartE2EDuration="4.630669109s" podCreationTimestamp="2026-01-27 18:47:49 +0000 UTC" firstStartedPulling="2026-01-27 18:47:50.554576414 +0000 UTC m=+6645.653549963" lastFinishedPulling="2026-01-27 18:47:53.119730083 +0000 UTC m=+6648.218703632" observedRunningTime="2026-01-27 18:47:53.617923648 +0000 UTC m=+6648.716897217" watchObservedRunningTime="2026-01-27 18:47:53.630669109 +0000 UTC m=+6648.729642698" Jan 27 18:47:59 crc kubenswrapper[5049]: I0127 18:47:59.557100 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:59 crc kubenswrapper[5049]: I0127 18:47:59.557651 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:59 crc kubenswrapper[5049]: I0127 18:47:59.602971 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xkv65"
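
The "Observed pod startup duration" entry above encodes a simple relation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. A worked check against the logged numbers, assuming exactly that semantics:

```go
// Worked example: 4.630669109s end-to-end minus a 2.565153669s pull window
// gives the logged podStartSLOduration of 2.06551544s.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-01-27 18:47:49 +0000 UTC")
	firstPull := parse("2026-01-27 18:47:50.554576414 +0000 UTC")
	lastPull := parse("2026-01-27 18:47:53.119730083 +0000 UTC")
	watchObserved := parse("2026-01-27 18:47:53.630669109 +0000 UTC")

	e2e := watchObserved.Sub(created)   // 4.630669109s, the podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.06551544s, the podStartSLOduration
	fmt.Println(e2e, slo)
}
```
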
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:59 crc kubenswrapper[5049]: I0127 18:47:59.678727 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:47:59 crc kubenswrapper[5049]: I0127 18:47:59.846368 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xkv65"] Jan 27 18:48:01 crc kubenswrapper[5049]: I0127 18:48:01.658439 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xkv65" podUID="d2ea20fb-0f3e-4544-8827-5f4826aa5613" containerName="registry-server" containerID="cri-o://29c619f84e220dce543a4d289f80e16814b631e39e6189d81d6427d4975742c3" gracePeriod=2 Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.131131 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.255840 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zrb9h"] Jan 27 18:48:02 crc kubenswrapper[5049]: E0127 18:48:02.256323 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ea20fb-0f3e-4544-8827-5f4826aa5613" containerName="extract-content" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.256350 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ea20fb-0f3e-4544-8827-5f4826aa5613" containerName="extract-content" Jan 27 18:48:02 crc kubenswrapper[5049]: E0127 18:48:02.256366 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ea20fb-0f3e-4544-8827-5f4826aa5613" containerName="registry-server" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.256373 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ea20fb-0f3e-4544-8827-5f4826aa5613" containerName="registry-server" Jan 27 18:48:02 crc kubenswrapper[5049]: E0127 18:48:02.256408 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ea20fb-0f3e-4544-8827-5f4826aa5613" containerName="extract-utilities" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.256418 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ea20fb-0f3e-4544-8827-5f4826aa5613" containerName="extract-utilities" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.256656 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ea20fb-0f3e-4544-8827-5f4826aa5613" containerName="registry-server" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.258393 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.284016 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrb9h"] Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.309800 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2ea20fb-0f3e-4544-8827-5f4826aa5613-utilities\") pod \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\" (UID: \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\") " Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.309890 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2ea20fb-0f3e-4544-8827-5f4826aa5613-catalog-content\") pod \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\" (UID: \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\") " Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.309926 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dpg4\" (UniqueName: \"kubernetes.io/projected/d2ea20fb-0f3e-4544-8827-5f4826aa5613-kube-api-access-8dpg4\") pod \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\" (UID: \"d2ea20fb-0f3e-4544-8827-5f4826aa5613\") " Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.310433 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-utilities\") pod \"redhat-operators-zrb9h\" (UID: \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\") " pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.310552 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9qrl\" (UniqueName: \"kubernetes.io/projected/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-kube-api-access-w9qrl\") pod \"redhat-operators-zrb9h\" (UID: \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\") " pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.310606 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-catalog-content\") pod \"redhat-operators-zrb9h\" (UID: \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\") " pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.310881 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2ea20fb-0f3e-4544-8827-5f4826aa5613-utilities" (OuterVolumeSpecName: "utilities") pod "d2ea20fb-0f3e-4544-8827-5f4826aa5613" (UID: "d2ea20fb-0f3e-4544-8827-5f4826aa5613"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.316993 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ea20fb-0f3e-4544-8827-5f4826aa5613-kube-api-access-8dpg4" (OuterVolumeSpecName: "kube-api-access-8dpg4") pod "d2ea20fb-0f3e-4544-8827-5f4826aa5613" (UID: "d2ea20fb-0f3e-4544-8827-5f4826aa5613"). InnerVolumeSpecName "kube-api-access-8dpg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.412705 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-utilities\") pod \"redhat-operators-zrb9h\" (UID: \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\") " pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.412802 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9qrl\" (UniqueName: \"kubernetes.io/projected/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-kube-api-access-w9qrl\") pod \"redhat-operators-zrb9h\" (UID: \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\") " pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.412835 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-catalog-content\") pod \"redhat-operators-zrb9h\" (UID: \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\") " pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.412942 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2ea20fb-0f3e-4544-8827-5f4826aa5613-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.412953 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dpg4\" (UniqueName: \"kubernetes.io/projected/d2ea20fb-0f3e-4544-8827-5f4826aa5613-kube-api-access-8dpg4\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.413255 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-utilities\") pod \"redhat-operators-zrb9h\" (UID: \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\") " pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.413465 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-catalog-content\") pod \"redhat-operators-zrb9h\" (UID: \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\") " pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.430162 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9qrl\" (UniqueName: \"kubernetes.io/projected/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-kube-api-access-w9qrl\") pod \"redhat-operators-zrb9h\" (UID: \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\") " pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.479334 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2ea20fb-0f3e-4544-8827-5f4826aa5613-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2ea20fb-0f3e-4544-8827-5f4826aa5613" (UID: "d2ea20fb-0f3e-4544-8827-5f4826aa5613"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.514545 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2ea20fb-0f3e-4544-8827-5f4826aa5613-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.590879 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.667616 5049 generic.go:334] "Generic (PLEG): container finished" podID="d2ea20fb-0f3e-4544-8827-5f4826aa5613" containerID="29c619f84e220dce543a4d289f80e16814b631e39e6189d81d6427d4975742c3" exitCode=0 Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.667656 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkv65" event={"ID":"d2ea20fb-0f3e-4544-8827-5f4826aa5613","Type":"ContainerDied","Data":"29c619f84e220dce543a4d289f80e16814b631e39e6189d81d6427d4975742c3"} Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.667696 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xkv65" event={"ID":"d2ea20fb-0f3e-4544-8827-5f4826aa5613","Type":"ContainerDied","Data":"b222d0f5976fe8ef64d9e8d3946f388b454ddd2f7f3343a364bb438036ece5eb"} Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.667717 5049 scope.go:117] "RemoveContainer" containerID="29c619f84e220dce543a4d289f80e16814b631e39e6189d81d6427d4975742c3" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.667730 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xkv65" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.700525 5049 scope.go:117] "RemoveContainer" containerID="4c5f997a40e5221f7bdca81724f0840b3d73d6741273dc147d7310274eb06928" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.723403 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xkv65"] Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.730760 5049 scope.go:117] "RemoveContainer" containerID="1121dcc54bef45b81a310420b84872dd48a950ca62863d59d5dca61f9ab7d625" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.735759 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xkv65"] Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.785880 5049 scope.go:117] "RemoveContainer" containerID="29c619f84e220dce543a4d289f80e16814b631e39e6189d81d6427d4975742c3" Jan 27 18:48:02 crc kubenswrapper[5049]: E0127 18:48:02.787201 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c619f84e220dce543a4d289f80e16814b631e39e6189d81d6427d4975742c3\": container with ID starting with 29c619f84e220dce543a4d289f80e16814b631e39e6189d81d6427d4975742c3 not found: ID does not exist" containerID="29c619f84e220dce543a4d289f80e16814b631e39e6189d81d6427d4975742c3" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.787230 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c619f84e220dce543a4d289f80e16814b631e39e6189d81d6427d4975742c3"} err="failed to get container status \"29c619f84e220dce543a4d289f80e16814b631e39e6189d81d6427d4975742c3\": rpc error: code = NotFound desc = could not find container 
\"29c619f84e220dce543a4d289f80e16814b631e39e6189d81d6427d4975742c3\": container with ID starting with 29c619f84e220dce543a4d289f80e16814b631e39e6189d81d6427d4975742c3 not found: ID does not exist" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.787250 5049 scope.go:117] "RemoveContainer" containerID="4c5f997a40e5221f7bdca81724f0840b3d73d6741273dc147d7310274eb06928" Jan 27 18:48:02 crc kubenswrapper[5049]: E0127 18:48:02.787823 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c5f997a40e5221f7bdca81724f0840b3d73d6741273dc147d7310274eb06928\": container with ID starting with 4c5f997a40e5221f7bdca81724f0840b3d73d6741273dc147d7310274eb06928 not found: ID does not exist" containerID="4c5f997a40e5221f7bdca81724f0840b3d73d6741273dc147d7310274eb06928" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.787875 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5f997a40e5221f7bdca81724f0840b3d73d6741273dc147d7310274eb06928"} err="failed to get container status \"4c5f997a40e5221f7bdca81724f0840b3d73d6741273dc147d7310274eb06928\": rpc error: code = NotFound desc = could not find container \"4c5f997a40e5221f7bdca81724f0840b3d73d6741273dc147d7310274eb06928\": container with ID starting with 4c5f997a40e5221f7bdca81724f0840b3d73d6741273dc147d7310274eb06928 not found: ID does not exist" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.787909 5049 scope.go:117] "RemoveContainer" containerID="1121dcc54bef45b81a310420b84872dd48a950ca62863d59d5dca61f9ab7d625" Jan 27 18:48:02 crc kubenswrapper[5049]: E0127 18:48:02.788310 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1121dcc54bef45b81a310420b84872dd48a950ca62863d59d5dca61f9ab7d625\": container with ID starting with 1121dcc54bef45b81a310420b84872dd48a950ca62863d59d5dca61f9ab7d625 not found: ID does not exist" containerID="1121dcc54bef45b81a310420b84872dd48a950ca62863d59d5dca61f9ab7d625" Jan 27 18:48:02 crc kubenswrapper[5049]: I0127 18:48:02.788349 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1121dcc54bef45b81a310420b84872dd48a950ca62863d59d5dca61f9ab7d625"} err="failed to get container status \"1121dcc54bef45b81a310420b84872dd48a950ca62863d59d5dca61f9ab7d625\": rpc error: code = NotFound desc = could not find container \"1121dcc54bef45b81a310420b84872dd48a950ca62863d59d5dca61f9ab7d625\": container with ID starting with 1121dcc54bef45b81a310420b84872dd48a950ca62863d59d5dca61f9ab7d625 not found: ID does not exist" Jan 27 18:48:03 crc kubenswrapper[5049]: I0127 18:48:03.061480 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zrb9h"] Jan 27 18:48:03 crc kubenswrapper[5049]: I0127 18:48:03.657364 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2ea20fb-0f3e-4544-8827-5f4826aa5613" path="/var/lib/kubelet/pods/d2ea20fb-0f3e-4544-8827-5f4826aa5613/volumes" Jan 27 18:48:03 crc kubenswrapper[5049]: I0127 18:48:03.682914 5049 generic.go:334] "Generic (PLEG): container finished" podID="3cfcdbb6-9e41-4ee4-962d-253c6e79b659" containerID="633926b7da1164cbe0fd559843de5b7cc09728ae2b6063085ce012092aff186e" exitCode=0 Jan 27 18:48:03 crc kubenswrapper[5049]: I0127 18:48:03.682960 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrb9h" 
event={"ID":"3cfcdbb6-9e41-4ee4-962d-253c6e79b659","Type":"ContainerDied","Data":"633926b7da1164cbe0fd559843de5b7cc09728ae2b6063085ce012092aff186e"} Jan 27 18:48:03 crc kubenswrapper[5049]: I0127 18:48:03.682989 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrb9h" event={"ID":"3cfcdbb6-9e41-4ee4-962d-253c6e79b659","Type":"ContainerStarted","Data":"bf5e5e7bbc3fc5af5fd8c7f10aaa54db36eaacff4045d25fed1d7915bc138981"} Jan 27 18:48:05 crc kubenswrapper[5049]: I0127 18:48:05.700945 5049 generic.go:334] "Generic (PLEG): container finished" podID="3cfcdbb6-9e41-4ee4-962d-253c6e79b659" containerID="ab28632f0a4cc80494ff48624c1821f8e3715870063fa891db9deca765b82cb1" exitCode=0 Jan 27 18:48:05 crc kubenswrapper[5049]: I0127 18:48:05.701017 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrb9h" event={"ID":"3cfcdbb6-9e41-4ee4-962d-253c6e79b659","Type":"ContainerDied","Data":"ab28632f0a4cc80494ff48624c1821f8e3715870063fa891db9deca765b82cb1"} Jan 27 18:48:06 crc kubenswrapper[5049]: I0127 18:48:06.712383 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrb9h" event={"ID":"3cfcdbb6-9e41-4ee4-962d-253c6e79b659","Type":"ContainerStarted","Data":"37914560b654487815a8f55bf4b1c48990b9fae52b73c0fb2db559517a330bc2"} Jan 27 18:48:06 crc kubenswrapper[5049]: I0127 18:48:06.740056 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zrb9h" podStartSLOduration=2.072528947 podStartE2EDuration="4.740030204s" podCreationTimestamp="2026-01-27 18:48:02 +0000 UTC" firstStartedPulling="2026-01-27 18:48:03.685441049 +0000 UTC m=+6658.784414598" lastFinishedPulling="2026-01-27 18:48:06.352942256 +0000 UTC m=+6661.451915855" observedRunningTime="2026-01-27 18:48:06.734659832 +0000 UTC m=+6661.833633401" watchObservedRunningTime="2026-01-27 18:48:06.740030204 +0000 UTC m=+6661.839003753" Jan 27 18:48:12 crc kubenswrapper[5049]: I0127 18:48:12.591292 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:12 crc kubenswrapper[5049]: I0127 18:48:12.591714 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:12 crc kubenswrapper[5049]: I0127 18:48:12.638393 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:12 crc kubenswrapper[5049]: I0127 18:48:12.814209 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:12 crc kubenswrapper[5049]: I0127 18:48:12.875314 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zrb9h"] Jan 27 18:48:14 crc kubenswrapper[5049]: I0127 18:48:14.787979 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zrb9h" podUID="3cfcdbb6-9e41-4ee4-962d-253c6e79b659" containerName="registry-server" containerID="cri-o://37914560b654487815a8f55bf4b1c48990b9fae52b73c0fb2db559517a330bc2" gracePeriod=2 Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.258801 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.392344 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-utilities\") pod \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\" (UID: \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\") " Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.392560 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9qrl\" (UniqueName: \"kubernetes.io/projected/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-kube-api-access-w9qrl\") pod \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\" (UID: \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\") " Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.392644 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-catalog-content\") pod \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\" (UID: \"3cfcdbb6-9e41-4ee4-962d-253c6e79b659\") " Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.393292 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-utilities" (OuterVolumeSpecName: "utilities") pod "3cfcdbb6-9e41-4ee4-962d-253c6e79b659" (UID: "3cfcdbb6-9e41-4ee4-962d-253c6e79b659"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.399897 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-kube-api-access-w9qrl" (OuterVolumeSpecName: "kube-api-access-w9qrl") pod "3cfcdbb6-9e41-4ee4-962d-253c6e79b659" (UID: "3cfcdbb6-9e41-4ee4-962d-253c6e79b659"). InnerVolumeSpecName "kube-api-access-w9qrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.495112 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.495150 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9qrl\" (UniqueName: \"kubernetes.io/projected/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-kube-api-access-w9qrl\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.797406 5049 generic.go:334] "Generic (PLEG): container finished" podID="3cfcdbb6-9e41-4ee4-962d-253c6e79b659" containerID="37914560b654487815a8f55bf4b1c48990b9fae52b73c0fb2db559517a330bc2" exitCode=0 Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.797459 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrb9h" event={"ID":"3cfcdbb6-9e41-4ee4-962d-253c6e79b659","Type":"ContainerDied","Data":"37914560b654487815a8f55bf4b1c48990b9fae52b73c0fb2db559517a330bc2"} Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.797490 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zrb9h" event={"ID":"3cfcdbb6-9e41-4ee4-962d-253c6e79b659","Type":"ContainerDied","Data":"bf5e5e7bbc3fc5af5fd8c7f10aaa54db36eaacff4045d25fed1d7915bc138981"} Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.797511 5049 scope.go:117] "RemoveContainer" containerID="37914560b654487815a8f55bf4b1c48990b9fae52b73c0fb2db559517a330bc2" Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.797624 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zrb9h" Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.818067 5049 scope.go:117] "RemoveContainer" containerID="ab28632f0a4cc80494ff48624c1821f8e3715870063fa891db9deca765b82cb1" Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.842078 5049 scope.go:117] "RemoveContainer" containerID="633926b7da1164cbe0fd559843de5b7cc09728ae2b6063085ce012092aff186e" Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.893337 5049 scope.go:117] "RemoveContainer" containerID="37914560b654487815a8f55bf4b1c48990b9fae52b73c0fb2db559517a330bc2" Jan 27 18:48:15 crc kubenswrapper[5049]: E0127 18:48:15.893844 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37914560b654487815a8f55bf4b1c48990b9fae52b73c0fb2db559517a330bc2\": container with ID starting with 37914560b654487815a8f55bf4b1c48990b9fae52b73c0fb2db559517a330bc2 not found: ID does not exist" containerID="37914560b654487815a8f55bf4b1c48990b9fae52b73c0fb2db559517a330bc2" Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.893888 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37914560b654487815a8f55bf4b1c48990b9fae52b73c0fb2db559517a330bc2"} err="failed to get container status \"37914560b654487815a8f55bf4b1c48990b9fae52b73c0fb2db559517a330bc2\": rpc error: code = NotFound desc = could not find container \"37914560b654487815a8f55bf4b1c48990b9fae52b73c0fb2db559517a330bc2\": container with ID starting with 37914560b654487815a8f55bf4b1c48990b9fae52b73c0fb2db559517a330bc2 not found: ID does not exist" Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.893917 5049 scope.go:117] "RemoveContainer" containerID="ab28632f0a4cc80494ff48624c1821f8e3715870063fa891db9deca765b82cb1" Jan 27 18:48:15 crc kubenswrapper[5049]: E0127 18:48:15.894276 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab28632f0a4cc80494ff48624c1821f8e3715870063fa891db9deca765b82cb1\": container with ID starting with ab28632f0a4cc80494ff48624c1821f8e3715870063fa891db9deca765b82cb1 not found: ID does not exist" containerID="ab28632f0a4cc80494ff48624c1821f8e3715870063fa891db9deca765b82cb1" Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.894320 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab28632f0a4cc80494ff48624c1821f8e3715870063fa891db9deca765b82cb1"} err="failed to get container status \"ab28632f0a4cc80494ff48624c1821f8e3715870063fa891db9deca765b82cb1\": rpc error: code = NotFound desc = could not find container \"ab28632f0a4cc80494ff48624c1821f8e3715870063fa891db9deca765b82cb1\": container with ID starting with ab28632f0a4cc80494ff48624c1821f8e3715870063fa891db9deca765b82cb1 not found: ID does not exist" Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.894346 5049 scope.go:117] "RemoveContainer" containerID="633926b7da1164cbe0fd559843de5b7cc09728ae2b6063085ce012092aff186e" Jan 27 18:48:15 crc kubenswrapper[5049]: E0127 18:48:15.894563 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"633926b7da1164cbe0fd559843de5b7cc09728ae2b6063085ce012092aff186e\": container with ID starting with 633926b7da1164cbe0fd559843de5b7cc09728ae2b6063085ce012092aff186e not found: ID does not exist" containerID="633926b7da1164cbe0fd559843de5b7cc09728ae2b6063085ce012092aff186e" 
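
The RemoveContainer / "ContainerStatus from runtime service failed" pairs above are a benign race: by the time the kubelet retries deletion, CRI-O has already removed the container, so the status call returns gRPC NotFound and the deletor only logs it. A sketch of the usual way such cleanup paths swallow that error (assumed shape, not kubelet source):

```go
// Sketch: treat a gRPC NotFound from the runtime as "already deleted" so the
// cleanup path can continue instead of failing.
package sketch

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil // container already removed; nothing left to do
	}
	return err
}
```
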
Jan 27 18:48:15 crc kubenswrapper[5049]: I0127 18:48:15.894593 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"633926b7da1164cbe0fd559843de5b7cc09728ae2b6063085ce012092aff186e"} err="failed to get container status \"633926b7da1164cbe0fd559843de5b7cc09728ae2b6063085ce012092aff186e\": rpc error: code = NotFound desc = could not find container \"633926b7da1164cbe0fd559843de5b7cc09728ae2b6063085ce012092aff186e\": container with ID starting with 633926b7da1164cbe0fd559843de5b7cc09728ae2b6063085ce012092aff186e not found: ID does not exist" Jan 27 18:48:16 crc kubenswrapper[5049]: I0127 18:48:16.273282 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cfcdbb6-9e41-4ee4-962d-253c6e79b659" (UID: "3cfcdbb6-9e41-4ee4-962d-253c6e79b659"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:48:16 crc kubenswrapper[5049]: I0127 18:48:16.311121 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cfcdbb6-9e41-4ee4-962d-253c6e79b659-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:48:16 crc kubenswrapper[5049]: I0127 18:48:16.441899 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zrb9h"] Jan 27 18:48:16 crc kubenswrapper[5049]: I0127 18:48:16.451504 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zrb9h"] Jan 27 18:48:17 crc kubenswrapper[5049]: I0127 18:48:17.663031 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cfcdbb6-9e41-4ee4-962d-253c6e79b659" path="/var/lib/kubelet/pods/3cfcdbb6-9e41-4ee4-962d-253c6e79b659/volumes" Jan 27 18:48:17 crc kubenswrapper[5049]: I0127 18:48:17.781350 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:48:17 crc kubenswrapper[5049]: I0127 18:48:17.781781 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:48:47 crc kubenswrapper[5049]: I0127 18:48:47.781714 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:48:47 crc kubenswrapper[5049]: I0127 18:48:47.782219 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:49:17 crc kubenswrapper[5049]: I0127 18:49:17.781127 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:49:17 crc kubenswrapper[5049]: I0127 18:49:17.781846 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:49:17 crc kubenswrapper[5049]: I0127 18:49:17.781892 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 18:49:17 crc kubenswrapper[5049]: I0127 18:49:17.782768 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:49:17 crc kubenswrapper[5049]: I0127 18:49:17.782837 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" gracePeriod=600 Jan 27 18:49:17 crc kubenswrapper[5049]: E0127 18:49:17.915007 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:49:18 crc kubenswrapper[5049]: I0127 18:49:18.328265 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" exitCode=0 Jan 27 18:49:18 crc kubenswrapper[5049]: I0127 18:49:18.328317 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b"} Jan 27 18:49:18 crc kubenswrapper[5049]: I0127 18:49:18.328354 5049 scope.go:117] "RemoveContainer" containerID="b0f60949c88e8ca99405409049f18e69de8736b36fdcaaaafc250436535c3831" Jan 27 18:49:18 crc kubenswrapper[5049]: I0127 18:49:18.329043 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:49:18 crc kubenswrapper[5049]: E0127 18:49:18.329485 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
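
From here the log settles into a steady CrashLoopBackOff pattern: a RemoveContainer attempt followed by a "back-off 5m0s" error every ten to fifteen seconds of sync retries. The 5m0s figure is the kubelet's restart backoff at its ceiling; the backoff is commonly described as starting around 10s and doubling per restart up to a 5m cap, as sketched below (parameters are assumptions, not read from this log):

```go
// Sketch of capped exponential restart backoff. Assumed parameters: 10s
// initial delay doubling to a 5m cap, which matches the "back-off 5m0s"
// ceiling repeated in the entries below.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial = 10 * time.Second
		max     = 5 * time.Minute
	)
	d := initial
	for i := 1; i <= 7; i++ {
		fmt.Printf("restart %d: wait %v\n", i, d)
		d *= 2
		if d > max {
			d = max // 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s, ...
		}
	}
}
```
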
podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:49:30 crc kubenswrapper[5049]: I0127 18:49:30.646750 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:49:30 crc kubenswrapper[5049]: E0127 18:49:30.647818 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:49:44 crc kubenswrapper[5049]: I0127 18:49:44.645959 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:49:44 crc kubenswrapper[5049]: E0127 18:49:44.647111 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:49:55 crc kubenswrapper[5049]: I0127 18:49:55.654247 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:49:55 crc kubenswrapper[5049]: E0127 18:49:55.655206 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:50:06 crc kubenswrapper[5049]: I0127 18:50:06.646270 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:50:06 crc kubenswrapper[5049]: E0127 18:50:06.647479 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:50:20 crc kubenswrapper[5049]: I0127 18:50:20.646074 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:50:20 crc kubenswrapper[5049]: E0127 18:50:20.647979 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:50:31 crc kubenswrapper[5049]: I0127 18:50:31.646883 5049 scope.go:117] "RemoveContainer" 
containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:50:31 crc kubenswrapper[5049]: E0127 18:50:31.648146 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:50:45 crc kubenswrapper[5049]: I0127 18:50:45.651821 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:50:45 crc kubenswrapper[5049]: E0127 18:50:45.654013 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:50:56 crc kubenswrapper[5049]: I0127 18:50:56.646334 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:50:56 crc kubenswrapper[5049]: E0127 18:50:56.647273 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:51:08 crc kubenswrapper[5049]: I0127 18:51:08.646189 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:51:08 crc kubenswrapper[5049]: E0127 18:51:08.646994 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:51:23 crc kubenswrapper[5049]: I0127 18:51:23.646642 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:51:23 crc kubenswrapper[5049]: E0127 18:51:23.647778 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:51:34 crc kubenswrapper[5049]: I0127 18:51:34.646424 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:51:34 crc kubenswrapper[5049]: E0127 18:51:34.647179 5049 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:51:48 crc kubenswrapper[5049]: I0127 18:51:48.645899 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:51:48 crc kubenswrapper[5049]: E0127 18:51:48.646599 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:52:00 crc kubenswrapper[5049]: I0127 18:52:00.645526 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:52:00 crc kubenswrapper[5049]: E0127 18:52:00.646360 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:52:14 crc kubenswrapper[5049]: I0127 18:52:14.647053 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:52:14 crc kubenswrapper[5049]: E0127 18:52:14.647881 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:52:29 crc kubenswrapper[5049]: I0127 18:52:29.646142 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:52:29 crc kubenswrapper[5049]: E0127 18:52:29.646859 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:52:40 crc kubenswrapper[5049]: I0127 18:52:40.645619 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:52:40 crc kubenswrapper[5049]: E0127 18:52:40.646314 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:52:52 crc kubenswrapper[5049]: I0127 18:52:52.647074 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:52:52 crc kubenswrapper[5049]: E0127 18:52:52.648029 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:53:06 crc kubenswrapper[5049]: I0127 18:53:06.646258 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:53:06 crc kubenswrapper[5049]: E0127 18:53:06.647333 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:53:20 crc kubenswrapper[5049]: I0127 18:53:20.646256 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:53:20 crc kubenswrapper[5049]: E0127 18:53:20.646916 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.774535 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sx5xl"] Jan 27 18:53:22 crc kubenswrapper[5049]: E0127 18:53:22.775201 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfcdbb6-9e41-4ee4-962d-253c6e79b659" containerName="registry-server" Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.775214 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfcdbb6-9e41-4ee4-962d-253c6e79b659" containerName="registry-server" Jan 27 18:53:22 crc kubenswrapper[5049]: E0127 18:53:22.775229 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfcdbb6-9e41-4ee4-962d-253c6e79b659" containerName="extract-content" Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.775234 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfcdbb6-9e41-4ee4-962d-253c6e79b659" containerName="extract-content" Jan 27 18:53:22 crc kubenswrapper[5049]: E0127 18:53:22.775269 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfcdbb6-9e41-4ee4-962d-253c6e79b659" containerName="extract-utilities" Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.775275 5049 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3cfcdbb6-9e41-4ee4-962d-253c6e79b659" containerName="extract-utilities" Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.775454 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cfcdbb6-9e41-4ee4-962d-253c6e79b659" containerName="registry-server" Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.776834 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.794520 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sx5xl"] Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.892772 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad00eb16-a86d-4159-a839-47e80a32108f-catalog-content\") pod \"community-operators-sx5xl\" (UID: \"ad00eb16-a86d-4159-a839-47e80a32108f\") " pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.892823 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2244d\" (UniqueName: \"kubernetes.io/projected/ad00eb16-a86d-4159-a839-47e80a32108f-kube-api-access-2244d\") pod \"community-operators-sx5xl\" (UID: \"ad00eb16-a86d-4159-a839-47e80a32108f\") " pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.892853 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad00eb16-a86d-4159-a839-47e80a32108f-utilities\") pod \"community-operators-sx5xl\" (UID: \"ad00eb16-a86d-4159-a839-47e80a32108f\") " pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.994691 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2244d\" (UniqueName: \"kubernetes.io/projected/ad00eb16-a86d-4159-a839-47e80a32108f-kube-api-access-2244d\") pod \"community-operators-sx5xl\" (UID: \"ad00eb16-a86d-4159-a839-47e80a32108f\") " pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.994738 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad00eb16-a86d-4159-a839-47e80a32108f-catalog-content\") pod \"community-operators-sx5xl\" (UID: \"ad00eb16-a86d-4159-a839-47e80a32108f\") " pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.994761 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad00eb16-a86d-4159-a839-47e80a32108f-utilities\") pod \"community-operators-sx5xl\" (UID: \"ad00eb16-a86d-4159-a839-47e80a32108f\") " pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.995356 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad00eb16-a86d-4159-a839-47e80a32108f-catalog-content\") pod \"community-operators-sx5xl\" (UID: \"ad00eb16-a86d-4159-a839-47e80a32108f\") " pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:22 crc kubenswrapper[5049]: I0127 18:53:22.995422 5049 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad00eb16-a86d-4159-a839-47e80a32108f-utilities\") pod \"community-operators-sx5xl\" (UID: \"ad00eb16-a86d-4159-a839-47e80a32108f\") " pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:23 crc kubenswrapper[5049]: I0127 18:53:23.015495 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2244d\" (UniqueName: \"kubernetes.io/projected/ad00eb16-a86d-4159-a839-47e80a32108f-kube-api-access-2244d\") pod \"community-operators-sx5xl\" (UID: \"ad00eb16-a86d-4159-a839-47e80a32108f\") " pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:23 crc kubenswrapper[5049]: I0127 18:53:23.119905 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:23 crc kubenswrapper[5049]: I0127 18:53:23.663982 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sx5xl"] Jan 27 18:53:23 crc kubenswrapper[5049]: W0127 18:53:23.668159 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad00eb16_a86d_4159_a839_47e80a32108f.slice/crio-3a89bde62f8ce0172da92bc04266180b0ee70e23ff77c8239bcbbcafafbd75b7 WatchSource:0}: Error finding container 3a89bde62f8ce0172da92bc04266180b0ee70e23ff77c8239bcbbcafafbd75b7: Status 404 returned error can't find the container with id 3a89bde62f8ce0172da92bc04266180b0ee70e23ff77c8239bcbbcafafbd75b7 Jan 27 18:53:24 crc kubenswrapper[5049]: I0127 18:53:24.489566 5049 generic.go:334] "Generic (PLEG): container finished" podID="ad00eb16-a86d-4159-a839-47e80a32108f" containerID="75fa475b958706ff8e43b8958f84a26d1282a324b234eb7c0de3094b302ecba3" exitCode=0 Jan 27 18:53:24 crc kubenswrapper[5049]: I0127 18:53:24.489665 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx5xl" event={"ID":"ad00eb16-a86d-4159-a839-47e80a32108f","Type":"ContainerDied","Data":"75fa475b958706ff8e43b8958f84a26d1282a324b234eb7c0de3094b302ecba3"} Jan 27 18:53:24 crc kubenswrapper[5049]: I0127 18:53:24.489742 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx5xl" event={"ID":"ad00eb16-a86d-4159-a839-47e80a32108f","Type":"ContainerStarted","Data":"3a89bde62f8ce0172da92bc04266180b0ee70e23ff77c8239bcbbcafafbd75b7"} Jan 27 18:53:24 crc kubenswrapper[5049]: I0127 18:53:24.491573 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 18:53:25 crc kubenswrapper[5049]: I0127 18:53:25.499343 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx5xl" event={"ID":"ad00eb16-a86d-4159-a839-47e80a32108f","Type":"ContainerStarted","Data":"c10932e2b94ef2b10fa50f0dd614b531c7c4dde8ce69950fb6ff214042864ad6"} Jan 27 18:53:26 crc kubenswrapper[5049]: I0127 18:53:26.517415 5049 generic.go:334] "Generic (PLEG): container finished" podID="ad00eb16-a86d-4159-a839-47e80a32108f" containerID="c10932e2b94ef2b10fa50f0dd614b531c7c4dde8ce69950fb6ff214042864ad6" exitCode=0 Jan 27 18:53:26 crc kubenswrapper[5049]: I0127 18:53:26.517472 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx5xl" 
event={"ID":"ad00eb16-a86d-4159-a839-47e80a32108f","Type":"ContainerDied","Data":"c10932e2b94ef2b10fa50f0dd614b531c7c4dde8ce69950fb6ff214042864ad6"} Jan 27 18:53:27 crc kubenswrapper[5049]: I0127 18:53:27.528592 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx5xl" event={"ID":"ad00eb16-a86d-4159-a839-47e80a32108f","Type":"ContainerStarted","Data":"45c82eb352ef4afbae29eeaeeabc0fa8be391b900edfbb2760d87150c843235c"} Jan 27 18:53:27 crc kubenswrapper[5049]: I0127 18:53:27.549222 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sx5xl" podStartSLOduration=3.106747662 podStartE2EDuration="5.549198515s" podCreationTimestamp="2026-01-27 18:53:22 +0000 UTC" firstStartedPulling="2026-01-27 18:53:24.491383497 +0000 UTC m=+6979.590357046" lastFinishedPulling="2026-01-27 18:53:26.93383434 +0000 UTC m=+6982.032807899" observedRunningTime="2026-01-27 18:53:27.547727243 +0000 UTC m=+6982.646700812" watchObservedRunningTime="2026-01-27 18:53:27.549198515 +0000 UTC m=+6982.648172064" Jan 27 18:53:33 crc kubenswrapper[5049]: I0127 18:53:33.120609 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:33 crc kubenswrapper[5049]: I0127 18:53:33.121278 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:33 crc kubenswrapper[5049]: I0127 18:53:33.165943 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:33 crc kubenswrapper[5049]: I0127 18:53:33.667711 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:33 crc kubenswrapper[5049]: I0127 18:53:33.712785 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sx5xl"] Jan 27 18:53:34 crc kubenswrapper[5049]: I0127 18:53:34.647399 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:53:34 crc kubenswrapper[5049]: E0127 18:53:34.647839 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:53:35 crc kubenswrapper[5049]: I0127 18:53:35.595324 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sx5xl" podUID="ad00eb16-a86d-4159-a839-47e80a32108f" containerName="registry-server" containerID="cri-o://45c82eb352ef4afbae29eeaeeabc0fa8be391b900edfbb2760d87150c843235c" gracePeriod=2 Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.061808 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.165240 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad00eb16-a86d-4159-a839-47e80a32108f-utilities\") pod \"ad00eb16-a86d-4159-a839-47e80a32108f\" (UID: \"ad00eb16-a86d-4159-a839-47e80a32108f\") " Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.165420 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad00eb16-a86d-4159-a839-47e80a32108f-catalog-content\") pod \"ad00eb16-a86d-4159-a839-47e80a32108f\" (UID: \"ad00eb16-a86d-4159-a839-47e80a32108f\") " Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.165631 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2244d\" (UniqueName: \"kubernetes.io/projected/ad00eb16-a86d-4159-a839-47e80a32108f-kube-api-access-2244d\") pod \"ad00eb16-a86d-4159-a839-47e80a32108f\" (UID: \"ad00eb16-a86d-4159-a839-47e80a32108f\") " Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.167437 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad00eb16-a86d-4159-a839-47e80a32108f-utilities" (OuterVolumeSpecName: "utilities") pod "ad00eb16-a86d-4159-a839-47e80a32108f" (UID: "ad00eb16-a86d-4159-a839-47e80a32108f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.172371 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad00eb16-a86d-4159-a839-47e80a32108f-kube-api-access-2244d" (OuterVolumeSpecName: "kube-api-access-2244d") pod "ad00eb16-a86d-4159-a839-47e80a32108f" (UID: "ad00eb16-a86d-4159-a839-47e80a32108f"). InnerVolumeSpecName "kube-api-access-2244d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.268486 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2244d\" (UniqueName: \"kubernetes.io/projected/ad00eb16-a86d-4159-a839-47e80a32108f-kube-api-access-2244d\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.268531 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad00eb16-a86d-4159-a839-47e80a32108f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.609979 5049 generic.go:334] "Generic (PLEG): container finished" podID="ad00eb16-a86d-4159-a839-47e80a32108f" containerID="45c82eb352ef4afbae29eeaeeabc0fa8be391b900edfbb2760d87150c843235c" exitCode=0 Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.610045 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx5xl" event={"ID":"ad00eb16-a86d-4159-a839-47e80a32108f","Type":"ContainerDied","Data":"45c82eb352ef4afbae29eeaeeabc0fa8be391b900edfbb2760d87150c843235c"} Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.610059 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sx5xl" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.610097 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx5xl" event={"ID":"ad00eb16-a86d-4159-a839-47e80a32108f","Type":"ContainerDied","Data":"3a89bde62f8ce0172da92bc04266180b0ee70e23ff77c8239bcbbcafafbd75b7"} Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.610134 5049 scope.go:117] "RemoveContainer" containerID="45c82eb352ef4afbae29eeaeeabc0fa8be391b900edfbb2760d87150c843235c" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.629931 5049 scope.go:117] "RemoveContainer" containerID="c10932e2b94ef2b10fa50f0dd614b531c7c4dde8ce69950fb6ff214042864ad6" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.665963 5049 scope.go:117] "RemoveContainer" containerID="75fa475b958706ff8e43b8958f84a26d1282a324b234eb7c0de3094b302ecba3" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.694218 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad00eb16-a86d-4159-a839-47e80a32108f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad00eb16-a86d-4159-a839-47e80a32108f" (UID: "ad00eb16-a86d-4159-a839-47e80a32108f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.700107 5049 scope.go:117] "RemoveContainer" containerID="45c82eb352ef4afbae29eeaeeabc0fa8be391b900edfbb2760d87150c843235c" Jan 27 18:53:36 crc kubenswrapper[5049]: E0127 18:53:36.700747 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45c82eb352ef4afbae29eeaeeabc0fa8be391b900edfbb2760d87150c843235c\": container with ID starting with 45c82eb352ef4afbae29eeaeeabc0fa8be391b900edfbb2760d87150c843235c not found: ID does not exist" containerID="45c82eb352ef4afbae29eeaeeabc0fa8be391b900edfbb2760d87150c843235c" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.700784 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45c82eb352ef4afbae29eeaeeabc0fa8be391b900edfbb2760d87150c843235c"} err="failed to get container status \"45c82eb352ef4afbae29eeaeeabc0fa8be391b900edfbb2760d87150c843235c\": rpc error: code = NotFound desc = could not find container \"45c82eb352ef4afbae29eeaeeabc0fa8be391b900edfbb2760d87150c843235c\": container with ID starting with 45c82eb352ef4afbae29eeaeeabc0fa8be391b900edfbb2760d87150c843235c not found: ID does not exist" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.700816 5049 scope.go:117] "RemoveContainer" containerID="c10932e2b94ef2b10fa50f0dd614b531c7c4dde8ce69950fb6ff214042864ad6" Jan 27 18:53:36 crc kubenswrapper[5049]: E0127 18:53:36.701147 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c10932e2b94ef2b10fa50f0dd614b531c7c4dde8ce69950fb6ff214042864ad6\": container with ID starting with c10932e2b94ef2b10fa50f0dd614b531c7c4dde8ce69950fb6ff214042864ad6 not found: ID does not exist" containerID="c10932e2b94ef2b10fa50f0dd614b531c7c4dde8ce69950fb6ff214042864ad6" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.701170 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c10932e2b94ef2b10fa50f0dd614b531c7c4dde8ce69950fb6ff214042864ad6"} err="failed to get container status 
\"c10932e2b94ef2b10fa50f0dd614b531c7c4dde8ce69950fb6ff214042864ad6\": rpc error: code = NotFound desc = could not find container \"c10932e2b94ef2b10fa50f0dd614b531c7c4dde8ce69950fb6ff214042864ad6\": container with ID starting with c10932e2b94ef2b10fa50f0dd614b531c7c4dde8ce69950fb6ff214042864ad6 not found: ID does not exist" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.701182 5049 scope.go:117] "RemoveContainer" containerID="75fa475b958706ff8e43b8958f84a26d1282a324b234eb7c0de3094b302ecba3" Jan 27 18:53:36 crc kubenswrapper[5049]: E0127 18:53:36.701751 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75fa475b958706ff8e43b8958f84a26d1282a324b234eb7c0de3094b302ecba3\": container with ID starting with 75fa475b958706ff8e43b8958f84a26d1282a324b234eb7c0de3094b302ecba3 not found: ID does not exist" containerID="75fa475b958706ff8e43b8958f84a26d1282a324b234eb7c0de3094b302ecba3" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.701801 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75fa475b958706ff8e43b8958f84a26d1282a324b234eb7c0de3094b302ecba3"} err="failed to get container status \"75fa475b958706ff8e43b8958f84a26d1282a324b234eb7c0de3094b302ecba3\": rpc error: code = NotFound desc = could not find container \"75fa475b958706ff8e43b8958f84a26d1282a324b234eb7c0de3094b302ecba3\": container with ID starting with 75fa475b958706ff8e43b8958f84a26d1282a324b234eb7c0de3094b302ecba3 not found: ID does not exist" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.778991 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad00eb16-a86d-4159-a839-47e80a32108f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.952792 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sx5xl"] Jan 27 18:53:36 crc kubenswrapper[5049]: I0127 18:53:36.962294 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sx5xl"] Jan 27 18:53:37 crc kubenswrapper[5049]: I0127 18:53:37.657403 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad00eb16-a86d-4159-a839-47e80a32108f" path="/var/lib/kubelet/pods/ad00eb16-a86d-4159-a839-47e80a32108f/volumes" Jan 27 18:53:45 crc kubenswrapper[5049]: I0127 18:53:45.661717 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:53:45 crc kubenswrapper[5049]: E0127 18:53:45.662479 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:53:57 crc kubenswrapper[5049]: I0127 18:53:57.112616 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:53:57 crc kubenswrapper[5049]: E0127 18:53:57.113278 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:54:11 crc kubenswrapper[5049]: I0127 18:54:11.647006 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:54:11 crc kubenswrapper[5049]: E0127 18:54:11.647784 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 18:54:24 crc kubenswrapper[5049]: I0127 18:54:24.645924 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:54:25 crc kubenswrapper[5049]: I0127 18:54:25.390849 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"333646c2b9d71633468e9581025fbccd333a4e979b2f52b40e5f7096445c71ce"} Jan 27 18:55:14 crc kubenswrapper[5049]: I0127 18:55:14.824993 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t47nv"] Jan 27 18:55:14 crc kubenswrapper[5049]: E0127 18:55:14.825848 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad00eb16-a86d-4159-a839-47e80a32108f" containerName="registry-server" Jan 27 18:55:14 crc kubenswrapper[5049]: I0127 18:55:14.825861 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad00eb16-a86d-4159-a839-47e80a32108f" containerName="registry-server" Jan 27 18:55:14 crc kubenswrapper[5049]: E0127 18:55:14.825880 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad00eb16-a86d-4159-a839-47e80a32108f" containerName="extract-content" Jan 27 18:55:14 crc kubenswrapper[5049]: I0127 18:55:14.825886 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad00eb16-a86d-4159-a839-47e80a32108f" containerName="extract-content" Jan 27 18:55:14 crc kubenswrapper[5049]: E0127 18:55:14.825913 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad00eb16-a86d-4159-a839-47e80a32108f" containerName="extract-utilities" Jan 27 18:55:14 crc kubenswrapper[5049]: I0127 18:55:14.825920 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad00eb16-a86d-4159-a839-47e80a32108f" containerName="extract-utilities" Jan 27 18:55:14 crc kubenswrapper[5049]: I0127 18:55:14.826113 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad00eb16-a86d-4159-a839-47e80a32108f" containerName="registry-server" Jan 27 18:55:14 crc kubenswrapper[5049]: I0127 18:55:14.827638 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:14 crc kubenswrapper[5049]: I0127 18:55:14.839559 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t47nv"] Jan 27 18:55:14 crc kubenswrapper[5049]: I0127 18:55:14.963191 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqzgx\" (UniqueName: \"kubernetes.io/projected/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-kube-api-access-tqzgx\") pod \"certified-operators-t47nv\" (UID: \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\") " pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:14 crc kubenswrapper[5049]: I0127 18:55:14.963256 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-catalog-content\") pod \"certified-operators-t47nv\" (UID: \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\") " pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:14 crc kubenswrapper[5049]: I0127 18:55:14.963402 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-utilities\") pod \"certified-operators-t47nv\" (UID: \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\") " pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:15 crc kubenswrapper[5049]: I0127 18:55:15.065445 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqzgx\" (UniqueName: \"kubernetes.io/projected/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-kube-api-access-tqzgx\") pod \"certified-operators-t47nv\" (UID: \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\") " pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:15 crc kubenswrapper[5049]: I0127 18:55:15.065518 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-catalog-content\") pod \"certified-operators-t47nv\" (UID: \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\") " pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:15 crc kubenswrapper[5049]: I0127 18:55:15.065612 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-utilities\") pod \"certified-operators-t47nv\" (UID: \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\") " pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:15 crc kubenswrapper[5049]: I0127 18:55:15.066080 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-catalog-content\") pod \"certified-operators-t47nv\" (UID: \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\") " pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:15 crc kubenswrapper[5049]: I0127 18:55:15.066157 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-utilities\") pod \"certified-operators-t47nv\" (UID: \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\") " pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:15 crc kubenswrapper[5049]: I0127 18:55:15.086253 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tqzgx\" (UniqueName: \"kubernetes.io/projected/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-kube-api-access-tqzgx\") pod \"certified-operators-t47nv\" (UID: \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\") " pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:15 crc kubenswrapper[5049]: I0127 18:55:15.153954 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:15 crc kubenswrapper[5049]: I0127 18:55:15.700140 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t47nv"] Jan 27 18:55:15 crc kubenswrapper[5049]: I0127 18:55:15.815431 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t47nv" event={"ID":"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a","Type":"ContainerStarted","Data":"7c27d6244b040c27118832c13a5f98032e43b1a0666ef2d26e102ca2792a08e9"} Jan 27 18:55:16 crc kubenswrapper[5049]: I0127 18:55:16.823804 5049 generic.go:334] "Generic (PLEG): container finished" podID="daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" containerID="bd860fad31457118f3d0fc0bba4785cf3aa1bfbfc60e0af78b6d054b55538da6" exitCode=0 Jan 27 18:55:16 crc kubenswrapper[5049]: I0127 18:55:16.823874 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t47nv" event={"ID":"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a","Type":"ContainerDied","Data":"bd860fad31457118f3d0fc0bba4785cf3aa1bfbfc60e0af78b6d054b55538da6"} Jan 27 18:55:17 crc kubenswrapper[5049]: I0127 18:55:17.833530 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t47nv" event={"ID":"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a","Type":"ContainerStarted","Data":"f2bafac7ce063a0b28182e8bebdcb4bd7e42b579b266a4d85a078339f9948bab"} Jan 27 18:55:18 crc kubenswrapper[5049]: I0127 18:55:18.842222 5049 generic.go:334] "Generic (PLEG): container finished" podID="daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" containerID="f2bafac7ce063a0b28182e8bebdcb4bd7e42b579b266a4d85a078339f9948bab" exitCode=0 Jan 27 18:55:18 crc kubenswrapper[5049]: I0127 18:55:18.842301 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t47nv" event={"ID":"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a","Type":"ContainerDied","Data":"f2bafac7ce063a0b28182e8bebdcb4bd7e42b579b266a4d85a078339f9948bab"} Jan 27 18:55:19 crc kubenswrapper[5049]: I0127 18:55:19.851853 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t47nv" event={"ID":"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a","Type":"ContainerStarted","Data":"e0bdcd4291a311950cf4790d146d32248f340f4eeca1b1d33a3699d2862671d6"} Jan 27 18:55:19 crc kubenswrapper[5049]: I0127 18:55:19.878708 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t47nv" podStartSLOduration=3.130073488 podStartE2EDuration="5.878690076s" podCreationTimestamp="2026-01-27 18:55:14 +0000 UTC" firstStartedPulling="2026-01-27 18:55:16.826048119 +0000 UTC m=+7091.925021668" lastFinishedPulling="2026-01-27 18:55:19.574664707 +0000 UTC m=+7094.673638256" observedRunningTime="2026-01-27 18:55:19.873047367 +0000 UTC m=+7094.972020916" watchObservedRunningTime="2026-01-27 18:55:19.878690076 +0000 UTC m=+7094.977663625" Jan 27 18:55:25 crc kubenswrapper[5049]: I0127 18:55:25.154601 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:25 crc kubenswrapper[5049]: I0127 18:55:25.155208 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:25 crc kubenswrapper[5049]: I0127 18:55:25.199637 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:25 crc kubenswrapper[5049]: I0127 18:55:25.970290 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:26 crc kubenswrapper[5049]: I0127 18:55:26.027267 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t47nv"] Jan 27 18:55:27 crc kubenswrapper[5049]: I0127 18:55:27.917486 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t47nv" podUID="daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" containerName="registry-server" containerID="cri-o://e0bdcd4291a311950cf4790d146d32248f340f4eeca1b1d33a3699d2862671d6" gracePeriod=2 Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.517036 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.649955 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-catalog-content\") pod \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\" (UID: \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\") " Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.650023 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-utilities\") pod \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\" (UID: \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\") " Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.650099 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqzgx\" (UniqueName: \"kubernetes.io/projected/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-kube-api-access-tqzgx\") pod \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\" (UID: \"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a\") " Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.651520 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-utilities" (OuterVolumeSpecName: "utilities") pod "daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" (UID: "daad9450-1fbc-4a0d-b7da-a3abab2f7b9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.659870 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-kube-api-access-tqzgx" (OuterVolumeSpecName: "kube-api-access-tqzgx") pod "daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" (UID: "daad9450-1fbc-4a0d-b7da-a3abab2f7b9a"). InnerVolumeSpecName "kube-api-access-tqzgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.721069 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" (UID: "daad9450-1fbc-4a0d-b7da-a3abab2f7b9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.756211 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.756252 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.756264 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqzgx\" (UniqueName: \"kubernetes.io/projected/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a-kube-api-access-tqzgx\") on node \"crc\" DevicePath \"\"" Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.928445 5049 generic.go:334] "Generic (PLEG): container finished" podID="daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" containerID="e0bdcd4291a311950cf4790d146d32248f340f4eeca1b1d33a3699d2862671d6" exitCode=0 Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.928499 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t47nv" event={"ID":"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a","Type":"ContainerDied","Data":"e0bdcd4291a311950cf4790d146d32248f340f4eeca1b1d33a3699d2862671d6"} Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.928532 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t47nv" event={"ID":"daad9450-1fbc-4a0d-b7da-a3abab2f7b9a","Type":"ContainerDied","Data":"7c27d6244b040c27118832c13a5f98032e43b1a0666ef2d26e102ca2792a08e9"} Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.928526 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t47nv" Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.928547 5049 scope.go:117] "RemoveContainer" containerID="e0bdcd4291a311950cf4790d146d32248f340f4eeca1b1d33a3699d2862671d6" Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.968164 5049 scope.go:117] "RemoveContainer" containerID="f2bafac7ce063a0b28182e8bebdcb4bd7e42b579b266a4d85a078339f9948bab" Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.996560 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t47nv"] Jan 27 18:55:28 crc kubenswrapper[5049]: I0127 18:55:28.996726 5049 scope.go:117] "RemoveContainer" containerID="bd860fad31457118f3d0fc0bba4785cf3aa1bfbfc60e0af78b6d054b55538da6" Jan 27 18:55:29 crc kubenswrapper[5049]: I0127 18:55:29.005930 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t47nv"] Jan 27 18:55:29 crc kubenswrapper[5049]: I0127 18:55:29.042970 5049 scope.go:117] "RemoveContainer" containerID="e0bdcd4291a311950cf4790d146d32248f340f4eeca1b1d33a3699d2862671d6" Jan 27 18:55:29 crc kubenswrapper[5049]: E0127 18:55:29.043468 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0bdcd4291a311950cf4790d146d32248f340f4eeca1b1d33a3699d2862671d6\": container with ID starting with e0bdcd4291a311950cf4790d146d32248f340f4eeca1b1d33a3699d2862671d6 not found: ID does not exist" containerID="e0bdcd4291a311950cf4790d146d32248f340f4eeca1b1d33a3699d2862671d6" Jan 27 18:55:29 crc kubenswrapper[5049]: I0127 18:55:29.043510 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0bdcd4291a311950cf4790d146d32248f340f4eeca1b1d33a3699d2862671d6"} err="failed to get container status \"e0bdcd4291a311950cf4790d146d32248f340f4eeca1b1d33a3699d2862671d6\": rpc error: code = NotFound desc = could not find container \"e0bdcd4291a311950cf4790d146d32248f340f4eeca1b1d33a3699d2862671d6\": container with ID starting with e0bdcd4291a311950cf4790d146d32248f340f4eeca1b1d33a3699d2862671d6 not found: ID does not exist" Jan 27 18:55:29 crc kubenswrapper[5049]: I0127 18:55:29.043544 5049 scope.go:117] "RemoveContainer" containerID="f2bafac7ce063a0b28182e8bebdcb4bd7e42b579b266a4d85a078339f9948bab" Jan 27 18:55:29 crc kubenswrapper[5049]: E0127 18:55:29.043949 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2bafac7ce063a0b28182e8bebdcb4bd7e42b579b266a4d85a078339f9948bab\": container with ID starting with f2bafac7ce063a0b28182e8bebdcb4bd7e42b579b266a4d85a078339f9948bab not found: ID does not exist" containerID="f2bafac7ce063a0b28182e8bebdcb4bd7e42b579b266a4d85a078339f9948bab" Jan 27 18:55:29 crc kubenswrapper[5049]: I0127 18:55:29.043966 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2bafac7ce063a0b28182e8bebdcb4bd7e42b579b266a4d85a078339f9948bab"} err="failed to get container status \"f2bafac7ce063a0b28182e8bebdcb4bd7e42b579b266a4d85a078339f9948bab\": rpc error: code = NotFound desc = could not find container \"f2bafac7ce063a0b28182e8bebdcb4bd7e42b579b266a4d85a078339f9948bab\": container with ID starting with f2bafac7ce063a0b28182e8bebdcb4bd7e42b579b266a4d85a078339f9948bab not found: ID does not exist" Jan 27 18:55:29 crc kubenswrapper[5049]: I0127 18:55:29.043978 5049 scope.go:117] "RemoveContainer" 
containerID="bd860fad31457118f3d0fc0bba4785cf3aa1bfbfc60e0af78b6d054b55538da6" Jan 27 18:55:29 crc kubenswrapper[5049]: E0127 18:55:29.044255 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd860fad31457118f3d0fc0bba4785cf3aa1bfbfc60e0af78b6d054b55538da6\": container with ID starting with bd860fad31457118f3d0fc0bba4785cf3aa1bfbfc60e0af78b6d054b55538da6 not found: ID does not exist" containerID="bd860fad31457118f3d0fc0bba4785cf3aa1bfbfc60e0af78b6d054b55538da6" Jan 27 18:55:29 crc kubenswrapper[5049]: I0127 18:55:29.044274 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd860fad31457118f3d0fc0bba4785cf3aa1bfbfc60e0af78b6d054b55538da6"} err="failed to get container status \"bd860fad31457118f3d0fc0bba4785cf3aa1bfbfc60e0af78b6d054b55538da6\": rpc error: code = NotFound desc = could not find container \"bd860fad31457118f3d0fc0bba4785cf3aa1bfbfc60e0af78b6d054b55538da6\": container with ID starting with bd860fad31457118f3d0fc0bba4785cf3aa1bfbfc60e0af78b6d054b55538da6 not found: ID does not exist" Jan 27 18:55:29 crc kubenswrapper[5049]: I0127 18:55:29.659873 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" path="/var/lib/kubelet/pods/daad9450-1fbc-4a0d-b7da-a3abab2f7b9a/volumes" Jan 27 18:56:47 crc kubenswrapper[5049]: I0127 18:56:47.781717 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:56:47 crc kubenswrapper[5049]: I0127 18:56:47.782459 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:57:17 crc kubenswrapper[5049]: I0127 18:57:17.781304 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:57:17 crc kubenswrapper[5049]: I0127 18:57:17.781856 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:57:47 crc kubenswrapper[5049]: I0127 18:57:47.781195 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 18:57:47 crc kubenswrapper[5049]: I0127 18:57:47.781820 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 18:57:47 crc kubenswrapper[5049]: I0127 18:57:47.781872 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 18:57:47 crc kubenswrapper[5049]: I0127 18:57:47.782624 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"333646c2b9d71633468e9581025fbccd333a4e979b2f52b40e5f7096445c71ce"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 18:57:47 crc kubenswrapper[5049]: I0127 18:57:47.782698 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://333646c2b9d71633468e9581025fbccd333a4e979b2f52b40e5f7096445c71ce" gracePeriod=600 Jan 27 18:57:48 crc kubenswrapper[5049]: I0127 18:57:48.175423 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="333646c2b9d71633468e9581025fbccd333a4e979b2f52b40e5f7096445c71ce" exitCode=0 Jan 27 18:57:48 crc kubenswrapper[5049]: I0127 18:57:48.175522 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"333646c2b9d71633468e9581025fbccd333a4e979b2f52b40e5f7096445c71ce"} Jan 27 18:57:48 crc kubenswrapper[5049]: I0127 18:57:48.175799 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a"} Jan 27 18:57:48 crc kubenswrapper[5049]: I0127 18:57:48.175833 5049 scope.go:117] "RemoveContainer" containerID="da8cb0a08349d756a0171c266954879a600e4ce0a0264ea46eeea5beace3499b" Jan 27 18:58:01 crc kubenswrapper[5049]: I0127 18:58:01.920280 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-656sz"] Jan 27 18:58:01 crc kubenswrapper[5049]: E0127 18:58:01.921737 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" containerName="extract-content" Jan 27 18:58:01 crc kubenswrapper[5049]: I0127 18:58:01.921753 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" containerName="extract-content" Jan 27 18:58:01 crc kubenswrapper[5049]: E0127 18:58:01.921768 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" containerName="registry-server" Jan 27 18:58:01 crc kubenswrapper[5049]: I0127 18:58:01.921776 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" containerName="registry-server" Jan 27 18:58:01 crc kubenswrapper[5049]: E0127 18:58:01.921820 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" containerName="extract-utilities" Jan 27 18:58:01 crc kubenswrapper[5049]: I0127 18:58:01.921828 5049 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" containerName="extract-utilities" Jan 27 18:58:01 crc kubenswrapper[5049]: I0127 18:58:01.922052 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="daad9450-1fbc-4a0d-b7da-a3abab2f7b9a" containerName="registry-server" Jan 27 18:58:01 crc kubenswrapper[5049]: I0127 18:58:01.923794 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-656sz" Jan 27 18:58:01 crc kubenswrapper[5049]: I0127 18:58:01.932345 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-656sz"] Jan 27 18:58:02 crc kubenswrapper[5049]: I0127 18:58:02.016634 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29mbh\" (UniqueName: \"kubernetes.io/projected/e1eb61c6-bf75-4501-aa82-533ff4590ee3-kube-api-access-29mbh\") pod \"redhat-marketplace-656sz\" (UID: \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\") " pod="openshift-marketplace/redhat-marketplace-656sz" Jan 27 18:58:02 crc kubenswrapper[5049]: I0127 18:58:02.016716 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1eb61c6-bf75-4501-aa82-533ff4590ee3-catalog-content\") pod \"redhat-marketplace-656sz\" (UID: \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\") " pod="openshift-marketplace/redhat-marketplace-656sz" Jan 27 18:58:02 crc kubenswrapper[5049]: I0127 18:58:02.016773 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1eb61c6-bf75-4501-aa82-533ff4590ee3-utilities\") pod \"redhat-marketplace-656sz\" (UID: \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\") " pod="openshift-marketplace/redhat-marketplace-656sz" Jan 27 18:58:02 crc kubenswrapper[5049]: I0127 18:58:02.118459 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29mbh\" (UniqueName: \"kubernetes.io/projected/e1eb61c6-bf75-4501-aa82-533ff4590ee3-kube-api-access-29mbh\") pod \"redhat-marketplace-656sz\" (UID: \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\") " pod="openshift-marketplace/redhat-marketplace-656sz" Jan 27 18:58:02 crc kubenswrapper[5049]: I0127 18:58:02.118518 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1eb61c6-bf75-4501-aa82-533ff4590ee3-catalog-content\") pod \"redhat-marketplace-656sz\" (UID: \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\") " pod="openshift-marketplace/redhat-marketplace-656sz" Jan 27 18:58:02 crc kubenswrapper[5049]: I0127 18:58:02.118558 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1eb61c6-bf75-4501-aa82-533ff4590ee3-utilities\") pod \"redhat-marketplace-656sz\" (UID: \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\") " pod="openshift-marketplace/redhat-marketplace-656sz" Jan 27 18:58:02 crc kubenswrapper[5049]: I0127 18:58:02.118967 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1eb61c6-bf75-4501-aa82-533ff4590ee3-catalog-content\") pod \"redhat-marketplace-656sz\" (UID: \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\") " pod="openshift-marketplace/redhat-marketplace-656sz" Jan 27 18:58:02 crc kubenswrapper[5049]: I0127 18:58:02.119027 5049 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1eb61c6-bf75-4501-aa82-533ff4590ee3-utilities\") pod \"redhat-marketplace-656sz\" (UID: \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\") " pod="openshift-marketplace/redhat-marketplace-656sz" Jan 27 18:58:02 crc kubenswrapper[5049]: I0127 18:58:02.140102 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29mbh\" (UniqueName: \"kubernetes.io/projected/e1eb61c6-bf75-4501-aa82-533ff4590ee3-kube-api-access-29mbh\") pod \"redhat-marketplace-656sz\" (UID: \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\") " pod="openshift-marketplace/redhat-marketplace-656sz" Jan 27 18:58:02 crc kubenswrapper[5049]: I0127 18:58:02.254686 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-656sz" Jan 27 18:58:02 crc kubenswrapper[5049]: I0127 18:58:02.724098 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-656sz"] Jan 27 18:58:03 crc kubenswrapper[5049]: I0127 18:58:03.322231 5049 generic.go:334] "Generic (PLEG): container finished" podID="e1eb61c6-bf75-4501-aa82-533ff4590ee3" containerID="c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301" exitCode=0 Jan 27 18:58:03 crc kubenswrapper[5049]: I0127 18:58:03.322648 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-656sz" event={"ID":"e1eb61c6-bf75-4501-aa82-533ff4590ee3","Type":"ContainerDied","Data":"c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301"} Jan 27 18:58:03 crc kubenswrapper[5049]: I0127 18:58:03.322748 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-656sz" event={"ID":"e1eb61c6-bf75-4501-aa82-533ff4590ee3","Type":"ContainerStarted","Data":"9d1e70f45ba6d35a8464eb8d79d97bd0f0aa6916db8db6a799e0a9784d129ce3"} Jan 27 18:58:04 crc kubenswrapper[5049]: I0127 18:58:04.335447 5049 generic.go:334] "Generic (PLEG): container finished" podID="e1eb61c6-bf75-4501-aa82-533ff4590ee3" containerID="76d7cb56fb2e7661a3c6fe88eb0b3283d94268a6476d968ec3956485290aa809" exitCode=0 Jan 27 18:58:04 crc kubenswrapper[5049]: I0127 18:58:04.335548 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-656sz" event={"ID":"e1eb61c6-bf75-4501-aa82-533ff4590ee3","Type":"ContainerDied","Data":"76d7cb56fb2e7661a3c6fe88eb0b3283d94268a6476d968ec3956485290aa809"} Jan 27 18:58:06 crc kubenswrapper[5049]: I0127 18:58:06.361896 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-656sz" event={"ID":"e1eb61c6-bf75-4501-aa82-533ff4590ee3","Type":"ContainerStarted","Data":"5a2d85691b723745b4e9d2b4637769a88cc815ac909192f60f75cbc8e7a95853"} Jan 27 18:58:06 crc kubenswrapper[5049]: I0127 18:58:06.418024 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-656sz" podStartSLOduration=3.371867411 podStartE2EDuration="5.418001782s" podCreationTimestamp="2026-01-27 18:58:01 +0000 UTC" firstStartedPulling="2026-01-27 18:58:03.327626987 +0000 UTC m=+7258.426600536" lastFinishedPulling="2026-01-27 18:58:05.373761358 +0000 UTC m=+7260.472734907" observedRunningTime="2026-01-27 18:58:06.41548553 +0000 UTC m=+7261.514459079" watchObservedRunningTime="2026-01-27 18:58:06.418001782 +0000 UTC m=+7261.516975351" Jan 27 18:58:12 crc kubenswrapper[5049]: I0127 
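The pod_startup_latency_tracker entry above decomposes from its own fields: end-to-end startup is observedRunningTime minus podCreationTimestamp, 18:58:06.418001782 - 18:58:01 = 5.418001782s, and the SLO duration subtracts the image-pull window, m=+7260.472734907 - m=+7258.426600536 = 2.046134371s of pulling, giving 5.418001782 - 2.046134371 = 3.371867411s, exactly the logged podStartSLOduration. A sketch of the same arithmetic, with values copied from the entry (subject to ordinary float64 rounding):

    // startuplatency.go - reproduces podStartSLOduration from the logged values.
    package main

    import "fmt"

    func main() {
    	e2e := 5.418001782                      // observedRunningTime - podCreationTimestamp
    	pull := 7260.472734907 - 7258.426600536 // lastFinishedPulling - firstStartedPulling (monotonic clock)
    	fmt.Printf("pull window:  %.9fs\n", pull)     // ~2.046134371s
    	fmt.Printf("SLO duration: %.9fs\n", e2e-pull) // ~3.371867411s
    }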
Jan 27 18:58:12 crc kubenswrapper[5049]: I0127 18:58:12.255201 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-656sz"
Jan 27 18:58:12 crc kubenswrapper[5049]: I0127 18:58:12.255627 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-656sz"
Jan 27 18:58:12 crc kubenswrapper[5049]: I0127 18:58:12.302392 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-656sz"
Jan 27 18:58:12 crc kubenswrapper[5049]: I0127 18:58:12.465110 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-656sz"
Jan 27 18:58:12 crc kubenswrapper[5049]: I0127 18:58:12.541130 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-656sz"]
Jan 27 18:58:14 crc kubenswrapper[5049]: I0127 18:58:14.426979 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-656sz" podUID="e1eb61c6-bf75-4501-aa82-533ff4590ee3" containerName="registry-server" containerID="cri-o://5a2d85691b723745b4e9d2b4637769a88cc815ac909192f60f75cbc8e7a95853" gracePeriod=2
Jan 27 18:58:14 crc kubenswrapper[5049]: I0127 18:58:14.893889 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-656sz"
Jan 27 18:58:14 crc kubenswrapper[5049]: I0127 18:58:14.998846 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29mbh\" (UniqueName: \"kubernetes.io/projected/e1eb61c6-bf75-4501-aa82-533ff4590ee3-kube-api-access-29mbh\") pod \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\" (UID: \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\") "
Jan 27 18:58:14 crc kubenswrapper[5049]: I0127 18:58:14.999155 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1eb61c6-bf75-4501-aa82-533ff4590ee3-utilities\") pod \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\" (UID: \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\") "
Jan 27 18:58:14 crc kubenswrapper[5049]: I0127 18:58:14.999219 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1eb61c6-bf75-4501-aa82-533ff4590ee3-catalog-content\") pod \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\" (UID: \"e1eb61c6-bf75-4501-aa82-533ff4590ee3\") "
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.000090 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1eb61c6-bf75-4501-aa82-533ff4590ee3-utilities" (OuterVolumeSpecName: "utilities") pod "e1eb61c6-bf75-4501-aa82-533ff4590ee3" (UID: "e1eb61c6-bf75-4501-aa82-533ff4590ee3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.012012 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1eb61c6-bf75-4501-aa82-533ff4590ee3-kube-api-access-29mbh" (OuterVolumeSpecName: "kube-api-access-29mbh") pod "e1eb61c6-bf75-4501-aa82-533ff4590ee3" (UID: "e1eb61c6-bf75-4501-aa82-533ff4590ee3"). InnerVolumeSpecName "kube-api-access-29mbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.020502 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1eb61c6-bf75-4501-aa82-533ff4590ee3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1eb61c6-bf75-4501-aa82-533ff4590ee3" (UID: "e1eb61c6-bf75-4501-aa82-533ff4590ee3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.100923 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29mbh\" (UniqueName: \"kubernetes.io/projected/e1eb61c6-bf75-4501-aa82-533ff4590ee3-kube-api-access-29mbh\") on node \"crc\" DevicePath \"\""
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.101156 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1eb61c6-bf75-4501-aa82-533ff4590ee3-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.101167 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1eb61c6-bf75-4501-aa82-533ff4590ee3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.439360 5049 generic.go:334] "Generic (PLEG): container finished" podID="e1eb61c6-bf75-4501-aa82-533ff4590ee3" containerID="5a2d85691b723745b4e9d2b4637769a88cc815ac909192f60f75cbc8e7a95853" exitCode=0
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.439416 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-656sz" event={"ID":"e1eb61c6-bf75-4501-aa82-533ff4590ee3","Type":"ContainerDied","Data":"5a2d85691b723745b4e9d2b4637769a88cc815ac909192f60f75cbc8e7a95853"}
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.439455 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-656sz" event={"ID":"e1eb61c6-bf75-4501-aa82-533ff4590ee3","Type":"ContainerDied","Data":"9d1e70f45ba6d35a8464eb8d79d97bd0f0aa6916db8db6a799e0a9784d129ce3"}
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.439489 5049 scope.go:117] "RemoveContainer" containerID="5a2d85691b723745b4e9d2b4637769a88cc815ac909192f60f75cbc8e7a95853"
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.439496 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-656sz"
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.476324 5049 scope.go:117] "RemoveContainer" containerID="76d7cb56fb2e7661a3c6fe88eb0b3283d94268a6476d968ec3956485290aa809"
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.489763 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-656sz"]
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.499917 5049 scope.go:117] "RemoveContainer" containerID="c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301"
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.500195 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-656sz"]
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.558443 5049 scope.go:117] "RemoveContainer" containerID="5a2d85691b723745b4e9d2b4637769a88cc815ac909192f60f75cbc8e7a95853"
Jan 27 18:58:15 crc kubenswrapper[5049]: E0127 18:58:15.559034 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a2d85691b723745b4e9d2b4637769a88cc815ac909192f60f75cbc8e7a95853\": container with ID starting with 5a2d85691b723745b4e9d2b4637769a88cc815ac909192f60f75cbc8e7a95853 not found: ID does not exist" containerID="5a2d85691b723745b4e9d2b4637769a88cc815ac909192f60f75cbc8e7a95853"
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.559114 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a2d85691b723745b4e9d2b4637769a88cc815ac909192f60f75cbc8e7a95853"} err="failed to get container status \"5a2d85691b723745b4e9d2b4637769a88cc815ac909192f60f75cbc8e7a95853\": rpc error: code = NotFound desc = could not find container \"5a2d85691b723745b4e9d2b4637769a88cc815ac909192f60f75cbc8e7a95853\": container with ID starting with 5a2d85691b723745b4e9d2b4637769a88cc815ac909192f60f75cbc8e7a95853 not found: ID does not exist"
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.559148 5049 scope.go:117] "RemoveContainer" containerID="76d7cb56fb2e7661a3c6fe88eb0b3283d94268a6476d968ec3956485290aa809"
Jan 27 18:58:15 crc kubenswrapper[5049]: E0127 18:58:15.559495 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76d7cb56fb2e7661a3c6fe88eb0b3283d94268a6476d968ec3956485290aa809\": container with ID starting with 76d7cb56fb2e7661a3c6fe88eb0b3283d94268a6476d968ec3956485290aa809 not found: ID does not exist" containerID="76d7cb56fb2e7661a3c6fe88eb0b3283d94268a6476d968ec3956485290aa809"
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.559525 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76d7cb56fb2e7661a3c6fe88eb0b3283d94268a6476d968ec3956485290aa809"} err="failed to get container status \"76d7cb56fb2e7661a3c6fe88eb0b3283d94268a6476d968ec3956485290aa809\": rpc error: code = NotFound desc = could not find container \"76d7cb56fb2e7661a3c6fe88eb0b3283d94268a6476d968ec3956485290aa809\": container with ID starting with 76d7cb56fb2e7661a3c6fe88eb0b3283d94268a6476d968ec3956485290aa809 not found: ID does not exist"
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.559539 5049 scope.go:117] "RemoveContainer" containerID="c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301"
Jan 27 18:58:15 crc kubenswrapper[5049]: E0127 18:58:15.559918 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301\": container with ID starting with c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301 not found: ID does not exist" containerID="c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301"
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.559949 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301"} err="failed to get container status \"c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301\": rpc error: code = NotFound desc = could not find container \"c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301\": container with ID starting with c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301 not found: ID does not exist"
Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.667104 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1eb61c6-bf75-4501-aa82-533ff4590ee3" path="/var/lib/kubelet/pods/e1eb61c6-bf75-4501-aa82-533ff4590ee3/volumes"
failed" err="rpc error: code = NotFound desc = could not find container \"c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301\": container with ID starting with c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301 not found: ID does not exist" containerID="c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301" Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.559949 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301"} err="failed to get container status \"c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301\": rpc error: code = NotFound desc = could not find container \"c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301\": container with ID starting with c10889eda3968549ba2e51eef7488a9f76759f498267219956af0d9de9b9c301 not found: ID does not exist" Jan 27 18:58:15 crc kubenswrapper[5049]: I0127 18:58:15.667104 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1eb61c6-bf75-4501-aa82-533ff4590ee3" path="/var/lib/kubelet/pods/e1eb61c6-bf75-4501-aa82-533ff4590ee3/volumes" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.179422 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt"] Jan 27 19:00:00 crc kubenswrapper[5049]: E0127 19:00:00.180571 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1eb61c6-bf75-4501-aa82-533ff4590ee3" containerName="extract-utilities" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.180596 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1eb61c6-bf75-4501-aa82-533ff4590ee3" containerName="extract-utilities" Jan 27 19:00:00 crc kubenswrapper[5049]: E0127 19:00:00.180622 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1eb61c6-bf75-4501-aa82-533ff4590ee3" containerName="registry-server" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.180631 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1eb61c6-bf75-4501-aa82-533ff4590ee3" containerName="registry-server" Jan 27 19:00:00 crc kubenswrapper[5049]: E0127 19:00:00.180663 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1eb61c6-bf75-4501-aa82-533ff4590ee3" containerName="extract-content" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.180697 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1eb61c6-bf75-4501-aa82-533ff4590ee3" containerName="extract-content" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.180955 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1eb61c6-bf75-4501-aa82-533ff4590ee3" containerName="registry-server" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.181883 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.186376 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.186479 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.192472 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt"] Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.310187 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-config-volume\") pod \"collect-profiles-29492340-wvsqt\" (UID: \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.310229 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-secret-volume\") pod \"collect-profiles-29492340-wvsqt\" (UID: \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.310339 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7whq\" (UniqueName: \"kubernetes.io/projected/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-kube-api-access-w7whq\") pod \"collect-profiles-29492340-wvsqt\" (UID: \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.412736 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7whq\" (UniqueName: \"kubernetes.io/projected/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-kube-api-access-w7whq\") pod \"collect-profiles-29492340-wvsqt\" (UID: \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.412914 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-config-volume\") pod \"collect-profiles-29492340-wvsqt\" (UID: \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.412933 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-secret-volume\") pod \"collect-profiles-29492340-wvsqt\" (UID: \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.414008 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-config-volume\") pod 
\"collect-profiles-29492340-wvsqt\" (UID: \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.420443 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-secret-volume\") pod \"collect-profiles-29492340-wvsqt\" (UID: \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.427992 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7whq\" (UniqueName: \"kubernetes.io/projected/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-kube-api-access-w7whq\") pod \"collect-profiles-29492340-wvsqt\" (UID: \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" Jan 27 19:00:00 crc kubenswrapper[5049]: I0127 19:00:00.515089 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" Jan 27 19:00:01 crc kubenswrapper[5049]: I0127 19:00:01.007814 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt"] Jan 27 19:00:01 crc kubenswrapper[5049]: I0127 19:00:01.401038 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" event={"ID":"199c03ed-e9ab-4cff-b0e8-0374c1b6462e","Type":"ContainerStarted","Data":"72f0c4f8283bf3abd73edc031ce71b46ec841fe35127d68eb9f7ae5989a914e6"} Jan 27 19:00:01 crc kubenswrapper[5049]: I0127 19:00:01.401084 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" event={"ID":"199c03ed-e9ab-4cff-b0e8-0374c1b6462e","Type":"ContainerStarted","Data":"072db168b71f58f0af7ac4eca02c5efe08eec49e680cf450b77fddc4010df492"} Jan 27 19:00:02 crc kubenswrapper[5049]: I0127 19:00:02.415263 5049 generic.go:334] "Generic (PLEG): container finished" podID="199c03ed-e9ab-4cff-b0e8-0374c1b6462e" containerID="72f0c4f8283bf3abd73edc031ce71b46ec841fe35127d68eb9f7ae5989a914e6" exitCode=0 Jan 27 19:00:02 crc kubenswrapper[5049]: I0127 19:00:02.415593 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" event={"ID":"199c03ed-e9ab-4cff-b0e8-0374c1b6462e","Type":"ContainerDied","Data":"72f0c4f8283bf3abd73edc031ce71b46ec841fe35127d68eb9f7ae5989a914e6"} Jan 27 19:00:03 crc kubenswrapper[5049]: I0127 19:00:03.802455 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" Jan 27 19:00:03 crc kubenswrapper[5049]: I0127 19:00:03.884610 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-secret-volume\") pod \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\" (UID: \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\") " Jan 27 19:00:03 crc kubenswrapper[5049]: I0127 19:00:03.884704 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7whq\" (UniqueName: \"kubernetes.io/projected/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-kube-api-access-w7whq\") pod \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\" (UID: \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\") " Jan 27 19:00:03 crc kubenswrapper[5049]: I0127 19:00:03.884758 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-config-volume\") pod \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\" (UID: \"199c03ed-e9ab-4cff-b0e8-0374c1b6462e\") " Jan 27 19:00:03 crc kubenswrapper[5049]: I0127 19:00:03.885990 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-config-volume" (OuterVolumeSpecName: "config-volume") pod "199c03ed-e9ab-4cff-b0e8-0374c1b6462e" (UID: "199c03ed-e9ab-4cff-b0e8-0374c1b6462e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:00:03 crc kubenswrapper[5049]: I0127 19:00:03.890483 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "199c03ed-e9ab-4cff-b0e8-0374c1b6462e" (UID: "199c03ed-e9ab-4cff-b0e8-0374c1b6462e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:00:03 crc kubenswrapper[5049]: I0127 19:00:03.890974 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-kube-api-access-w7whq" (OuterVolumeSpecName: "kube-api-access-w7whq") pod "199c03ed-e9ab-4cff-b0e8-0374c1b6462e" (UID: "199c03ed-e9ab-4cff-b0e8-0374c1b6462e"). InnerVolumeSpecName "kube-api-access-w7whq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:00:03 crc kubenswrapper[5049]: I0127 19:00:03.987075 5049 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:03 crc kubenswrapper[5049]: I0127 19:00:03.987121 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7whq\" (UniqueName: \"kubernetes.io/projected/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-kube-api-access-w7whq\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:03 crc kubenswrapper[5049]: I0127 19:00:03.987133 5049 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/199c03ed-e9ab-4cff-b0e8-0374c1b6462e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:00:04 crc kubenswrapper[5049]: I0127 19:00:04.431762 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" event={"ID":"199c03ed-e9ab-4cff-b0e8-0374c1b6462e","Type":"ContainerDied","Data":"072db168b71f58f0af7ac4eca02c5efe08eec49e680cf450b77fddc4010df492"} Jan 27 19:00:04 crc kubenswrapper[5049]: I0127 19:00:04.432069 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="072db168b71f58f0af7ac4eca02c5efe08eec49e680cf450b77fddc4010df492" Jan 27 19:00:04 crc kubenswrapper[5049]: I0127 19:00:04.432122 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt" Jan 27 19:00:04 crc kubenswrapper[5049]: I0127 19:00:04.879076 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs"] Jan 27 19:00:04 crc kubenswrapper[5049]: I0127 19:00:04.886060 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492295-swqqs"] Jan 27 19:00:05 crc kubenswrapper[5049]: I0127 19:00:05.658365 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="640d350d-23ee-47b5-bdd1-404f1d26226e" path="/var/lib/kubelet/pods/640d350d-23ee-47b5-bdd1-404f1d26226e/volumes" Jan 27 19:00:17 crc kubenswrapper[5049]: I0127 19:00:17.781861 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:00:17 crc kubenswrapper[5049]: I0127 19:00:17.782498 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:00:20 crc kubenswrapper[5049]: I0127 19:00:20.230848 5049 scope.go:117] "RemoveContainer" containerID="acb8dc8e60eaf75f70e413e3cf1bb75d70fa7c496d0859e16c2969c8d456fb0f" Jan 27 19:00:47 crc kubenswrapper[5049]: I0127 19:00:47.781075 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 27 19:00:47 crc kubenswrapper[5049]: I0127 19:00:47.781894 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.162690 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29492341-cb5bv"] Jan 27 19:01:00 crc kubenswrapper[5049]: E0127 19:01:00.165352 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="199c03ed-e9ab-4cff-b0e8-0374c1b6462e" containerName="collect-profiles" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.165443 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="199c03ed-e9ab-4cff-b0e8-0374c1b6462e" containerName="collect-profiles" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.165732 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="199c03ed-e9ab-4cff-b0e8-0374c1b6462e" containerName="collect-profiles" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.166444 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.177957 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492341-cb5bv"] Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.282088 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-fernet-keys\") pod \"keystone-cron-29492341-cb5bv\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.282145 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdwnf\" (UniqueName: \"kubernetes.io/projected/40aa2585-6ba9-44c4-9511-816f01c80de1-kube-api-access-rdwnf\") pod \"keystone-cron-29492341-cb5bv\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.282289 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-combined-ca-bundle\") pod \"keystone-cron-29492341-cb5bv\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.282937 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-config-data\") pod \"keystone-cron-29492341-cb5bv\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.385436 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-fernet-keys\") pod \"keystone-cron-29492341-cb5bv\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:00 crc 
kubenswrapper[5049]: I0127 19:01:00.385544 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwnf\" (UniqueName: \"kubernetes.io/projected/40aa2585-6ba9-44c4-9511-816f01c80de1-kube-api-access-rdwnf\") pod \"keystone-cron-29492341-cb5bv\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.385641 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-combined-ca-bundle\") pod \"keystone-cron-29492341-cb5bv\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.385768 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-config-data\") pod \"keystone-cron-29492341-cb5bv\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.392293 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-fernet-keys\") pod \"keystone-cron-29492341-cb5bv\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.392958 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-config-data\") pod \"keystone-cron-29492341-cb5bv\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.393757 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-combined-ca-bundle\") pod \"keystone-cron-29492341-cb5bv\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.406487 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwnf\" (UniqueName: \"kubernetes.io/projected/40aa2585-6ba9-44c4-9511-816f01c80de1-kube-api-access-rdwnf\") pod \"keystone-cron-29492341-cb5bv\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.507754 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:00 crc kubenswrapper[5049]: I0127 19:01:00.965501 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492341-cb5bv"] Jan 27 19:01:01 crc kubenswrapper[5049]: I0127 19:01:01.361803 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492341-cb5bv" event={"ID":"40aa2585-6ba9-44c4-9511-816f01c80de1","Type":"ContainerStarted","Data":"6b8e2ea862ee95d547d28553664a1bf8c91903ec5b1e8d040e7008184efbf40c"} Jan 27 19:01:01 crc kubenswrapper[5049]: I0127 19:01:01.363273 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492341-cb5bv" event={"ID":"40aa2585-6ba9-44c4-9511-816f01c80de1","Type":"ContainerStarted","Data":"34466a0918cff90f89ea5aa87a0beee3ccc7023ec5333973d72fa3b3488628b4"} Jan 27 19:01:01 crc kubenswrapper[5049]: I0127 19:01:01.389308 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29492341-cb5bv" podStartSLOduration=1.389288556 podStartE2EDuration="1.389288556s" podCreationTimestamp="2026-01-27 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 19:01:01.387333521 +0000 UTC m=+7436.486307150" watchObservedRunningTime="2026-01-27 19:01:01.389288556 +0000 UTC m=+7436.488262115" Jan 27 19:01:04 crc kubenswrapper[5049]: I0127 19:01:04.391114 5049 generic.go:334] "Generic (PLEG): container finished" podID="40aa2585-6ba9-44c4-9511-816f01c80de1" containerID="6b8e2ea862ee95d547d28553664a1bf8c91903ec5b1e8d040e7008184efbf40c" exitCode=0 Jan 27 19:01:04 crc kubenswrapper[5049]: I0127 19:01:04.391204 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492341-cb5bv" event={"ID":"40aa2585-6ba9-44c4-9511-816f01c80de1","Type":"ContainerDied","Data":"6b8e2ea862ee95d547d28553664a1bf8c91903ec5b1e8d040e7008184efbf40c"} Jan 27 19:01:05 crc kubenswrapper[5049]: I0127 19:01:05.799818 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:05 crc kubenswrapper[5049]: I0127 19:01:05.927310 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-fernet-keys\") pod \"40aa2585-6ba9-44c4-9511-816f01c80de1\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " Jan 27 19:01:05 crc kubenswrapper[5049]: I0127 19:01:05.927418 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-config-data\") pod \"40aa2585-6ba9-44c4-9511-816f01c80de1\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " Jan 27 19:01:05 crc kubenswrapper[5049]: I0127 19:01:05.927459 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdwnf\" (UniqueName: \"kubernetes.io/projected/40aa2585-6ba9-44c4-9511-816f01c80de1-kube-api-access-rdwnf\") pod \"40aa2585-6ba9-44c4-9511-816f01c80de1\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " Jan 27 19:01:05 crc kubenswrapper[5049]: I0127 19:01:05.927507 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-combined-ca-bundle\") pod \"40aa2585-6ba9-44c4-9511-816f01c80de1\" (UID: \"40aa2585-6ba9-44c4-9511-816f01c80de1\") " Jan 27 19:01:05 crc kubenswrapper[5049]: I0127 19:01:05.940701 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40aa2585-6ba9-44c4-9511-816f01c80de1-kube-api-access-rdwnf" (OuterVolumeSpecName: "kube-api-access-rdwnf") pod "40aa2585-6ba9-44c4-9511-816f01c80de1" (UID: "40aa2585-6ba9-44c4-9511-816f01c80de1"). InnerVolumeSpecName "kube-api-access-rdwnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:01:05 crc kubenswrapper[5049]: I0127 19:01:05.941475 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "40aa2585-6ba9-44c4-9511-816f01c80de1" (UID: "40aa2585-6ba9-44c4-9511-816f01c80de1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:05 crc kubenswrapper[5049]: I0127 19:01:05.956986 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40aa2585-6ba9-44c4-9511-816f01c80de1" (UID: "40aa2585-6ba9-44c4-9511-816f01c80de1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:05 crc kubenswrapper[5049]: I0127 19:01:05.979401 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-config-data" (OuterVolumeSpecName: "config-data") pod "40aa2585-6ba9-44c4-9511-816f01c80de1" (UID: "40aa2585-6ba9-44c4-9511-816f01c80de1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:01:06 crc kubenswrapper[5049]: I0127 19:01:06.030986 5049 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:06 crc kubenswrapper[5049]: I0127 19:01:06.031033 5049 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:06 crc kubenswrapper[5049]: I0127 19:01:06.031048 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdwnf\" (UniqueName: \"kubernetes.io/projected/40aa2585-6ba9-44c4-9511-816f01c80de1-kube-api-access-rdwnf\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:06 crc kubenswrapper[5049]: I0127 19:01:06.031067 5049 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40aa2585-6ba9-44c4-9511-816f01c80de1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 19:01:06 crc kubenswrapper[5049]: I0127 19:01:06.418358 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492341-cb5bv" event={"ID":"40aa2585-6ba9-44c4-9511-816f01c80de1","Type":"ContainerDied","Data":"34466a0918cff90f89ea5aa87a0beee3ccc7023ec5333973d72fa3b3488628b4"} Jan 27 19:01:06 crc kubenswrapper[5049]: I0127 19:01:06.418718 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34466a0918cff90f89ea5aa87a0beee3ccc7023ec5333973d72fa3b3488628b4" Jan 27 19:01:06 crc kubenswrapper[5049]: I0127 19:01:06.418459 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492341-cb5bv" Jan 27 19:01:17 crc kubenswrapper[5049]: I0127 19:01:17.781787 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:01:17 crc kubenswrapper[5049]: I0127 19:01:17.783237 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:01:17 crc kubenswrapper[5049]: I0127 19:01:17.783323 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 19:01:17 crc kubenswrapper[5049]: I0127 19:01:17.784331 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:01:17 crc kubenswrapper[5049]: I0127 19:01:17.784418 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" 
containerID="cri-o://0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" gracePeriod=600 Jan 27 19:01:17 crc kubenswrapper[5049]: E0127 19:01:17.934046 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:01:18 crc kubenswrapper[5049]: I0127 19:01:18.524559 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" exitCode=0 Jan 27 19:01:18 crc kubenswrapper[5049]: I0127 19:01:18.524616 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a"} Jan 27 19:01:18 crc kubenswrapper[5049]: I0127 19:01:18.524657 5049 scope.go:117] "RemoveContainer" containerID="333646c2b9d71633468e9581025fbccd333a4e979b2f52b40e5f7096445c71ce" Jan 27 19:01:18 crc kubenswrapper[5049]: I0127 19:01:18.525514 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:01:18 crc kubenswrapper[5049]: E0127 19:01:18.525925 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:01:31 crc kubenswrapper[5049]: I0127 19:01:31.646043 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:01:31 crc kubenswrapper[5049]: E0127 19:01:31.646881 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:01:45 crc kubenswrapper[5049]: I0127 19:01:45.652394 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:01:45 crc kubenswrapper[5049]: E0127 19:01:45.653361 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:01:58 crc kubenswrapper[5049]: I0127 19:01:58.646429 5049 scope.go:117] "RemoveContainer" 
containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:01:58 crc kubenswrapper[5049]: E0127 19:01:58.647232 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:02:10 crc kubenswrapper[5049]: I0127 19:02:10.646569 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:02:10 crc kubenswrapper[5049]: E0127 19:02:10.647446 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:02:21 crc kubenswrapper[5049]: I0127 19:02:21.646420 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:02:21 crc kubenswrapper[5049]: E0127 19:02:21.647445 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:02:34 crc kubenswrapper[5049]: I0127 19:02:34.646350 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:02:34 crc kubenswrapper[5049]: E0127 19:02:34.647833 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:02:47 crc kubenswrapper[5049]: I0127 19:02:47.646974 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:02:47 crc kubenswrapper[5049]: E0127 19:02:47.648002 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:03:00 crc kubenswrapper[5049]: I0127 19:03:00.645913 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:03:00 crc kubenswrapper[5049]: E0127 19:03:00.647387 5049 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:03:15 crc kubenswrapper[5049]: I0127 19:03:15.663561 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:03:15 crc kubenswrapper[5049]: E0127 19:03:15.664622 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:03:26 crc kubenswrapper[5049]: I0127 19:03:26.646236 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:03:26 crc kubenswrapper[5049]: E0127 19:03:26.647362 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:03:37 crc kubenswrapper[5049]: I0127 19:03:37.646640 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:03:37 crc kubenswrapper[5049]: E0127 19:03:37.647532 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:03:51 crc kubenswrapper[5049]: I0127 19:03:51.646732 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:03:51 crc kubenswrapper[5049]: E0127 19:03:51.647668 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:04:06 crc kubenswrapper[5049]: I0127 19:04:06.646623 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:04:06 crc kubenswrapper[5049]: E0127 19:04:06.647478 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:04:17 crc kubenswrapper[5049]: I0127 19:04:17.646844 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:04:17 crc kubenswrapper[5049]: E0127 19:04:17.647619 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:04:30 crc kubenswrapper[5049]: I0127 19:04:30.646900 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:04:30 crc kubenswrapper[5049]: E0127 19:04:30.648063 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:04:44 crc kubenswrapper[5049]: I0127 19:04:44.645873 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:04:44 crc kubenswrapper[5049]: E0127 19:04:44.646656 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:04:59 crc kubenswrapper[5049]: I0127 19:04:59.646414 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:04:59 crc kubenswrapper[5049]: E0127 19:04:59.647120 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:05:13 crc kubenswrapper[5049]: I0127 19:05:13.646293 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:05:13 crc kubenswrapper[5049]: E0127 19:05:13.647087 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" 
podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:05:26 crc kubenswrapper[5049]: I0127 19:05:26.646482 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:05:26 crc kubenswrapper[5049]: E0127 19:05:26.647469 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:05:37 crc kubenswrapper[5049]: I0127 19:05:37.646216 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:05:37 crc kubenswrapper[5049]: E0127 19:05:37.647364 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:05:50 crc kubenswrapper[5049]: I0127 19:05:50.646890 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:05:50 crc kubenswrapper[5049]: E0127 19:05:50.647779 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:06:03 crc kubenswrapper[5049]: I0127 19:06:03.646024 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:06:03 crc kubenswrapper[5049]: E0127 19:06:03.647044 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:06:16 crc kubenswrapper[5049]: I0127 19:06:16.646785 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:06:16 crc kubenswrapper[5049]: E0127 19:06:16.647655 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:06:28 crc kubenswrapper[5049]: I0127 19:06:28.648714 5049 scope.go:117] "RemoveContainer" 
containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a" Jan 27 19:06:29 crc kubenswrapper[5049]: I0127 19:06:29.325881 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"fa623068b6bfd43b77a27803fad01500d85c86f42f3e20f99c90e696869ae1ac"} Jan 27 19:07:07 crc kubenswrapper[5049]: I0127 19:07:07.913709 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xkv4v"] Jan 27 19:07:07 crc kubenswrapper[5049]: E0127 19:07:07.915811 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40aa2585-6ba9-44c4-9511-816f01c80de1" containerName="keystone-cron" Jan 27 19:07:07 crc kubenswrapper[5049]: I0127 19:07:07.915922 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="40aa2585-6ba9-44c4-9511-816f01c80de1" containerName="keystone-cron" Jan 27 19:07:07 crc kubenswrapper[5049]: I0127 19:07:07.916223 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="40aa2585-6ba9-44c4-9511-816f01c80de1" containerName="keystone-cron" Jan 27 19:07:07 crc kubenswrapper[5049]: I0127 19:07:07.918120 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:07 crc kubenswrapper[5049]: I0127 19:07:07.928795 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xkv4v"] Jan 27 19:07:07 crc kubenswrapper[5049]: I0127 19:07:07.969762 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a366ce2-40d3-40b0-a819-02dcebd0762c-utilities\") pod \"certified-operators-xkv4v\" (UID: \"5a366ce2-40d3-40b0-a819-02dcebd0762c\") " pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:07 crc kubenswrapper[5049]: I0127 19:07:07.970045 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a366ce2-40d3-40b0-a819-02dcebd0762c-catalog-content\") pod \"certified-operators-xkv4v\" (UID: \"5a366ce2-40d3-40b0-a819-02dcebd0762c\") " pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:07 crc kubenswrapper[5049]: I0127 19:07:07.970216 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr6wf\" (UniqueName: \"kubernetes.io/projected/5a366ce2-40d3-40b0-a819-02dcebd0762c-kube-api-access-kr6wf\") pod \"certified-operators-xkv4v\" (UID: \"5a366ce2-40d3-40b0-a819-02dcebd0762c\") " pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.072107 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a366ce2-40d3-40b0-a819-02dcebd0762c-catalog-content\") pod \"certified-operators-xkv4v\" (UID: \"5a366ce2-40d3-40b0-a819-02dcebd0762c\") " pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.072210 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr6wf\" (UniqueName: \"kubernetes.io/projected/5a366ce2-40d3-40b0-a819-02dcebd0762c-kube-api-access-kr6wf\") pod \"certified-operators-xkv4v\" (UID: \"5a366ce2-40d3-40b0-a819-02dcebd0762c\") " 
pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.072281 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a366ce2-40d3-40b0-a819-02dcebd0762c-utilities\") pod \"certified-operators-xkv4v\" (UID: \"5a366ce2-40d3-40b0-a819-02dcebd0762c\") " pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.073123 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a366ce2-40d3-40b0-a819-02dcebd0762c-utilities\") pod \"certified-operators-xkv4v\" (UID: \"5a366ce2-40d3-40b0-a819-02dcebd0762c\") " pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.073230 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a366ce2-40d3-40b0-a819-02dcebd0762c-catalog-content\") pod \"certified-operators-xkv4v\" (UID: \"5a366ce2-40d3-40b0-a819-02dcebd0762c\") " pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.094267 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr6wf\" (UniqueName: \"kubernetes.io/projected/5a366ce2-40d3-40b0-a819-02dcebd0762c-kube-api-access-kr6wf\") pod \"certified-operators-xkv4v\" (UID: \"5a366ce2-40d3-40b0-a819-02dcebd0762c\") " pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.116530 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nwgjl"] Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.118913 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.125656 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nwgjl"] Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.179802 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2778ca8a-f777-4571-9f19-e7c7992611d8-utilities\") pod \"community-operators-nwgjl\" (UID: \"2778ca8a-f777-4571-9f19-e7c7992611d8\") " pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.179982 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-756r4\" (UniqueName: \"kubernetes.io/projected/2778ca8a-f777-4571-9f19-e7c7992611d8-kube-api-access-756r4\") pod \"community-operators-nwgjl\" (UID: \"2778ca8a-f777-4571-9f19-e7c7992611d8\") " pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.180044 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2778ca8a-f777-4571-9f19-e7c7992611d8-catalog-content\") pod \"community-operators-nwgjl\" (UID: \"2778ca8a-f777-4571-9f19-e7c7992611d8\") " pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.247689 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.281529 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2778ca8a-f777-4571-9f19-e7c7992611d8-utilities\") pod \"community-operators-nwgjl\" (UID: \"2778ca8a-f777-4571-9f19-e7c7992611d8\") " pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.281629 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-756r4\" (UniqueName: \"kubernetes.io/projected/2778ca8a-f777-4571-9f19-e7c7992611d8-kube-api-access-756r4\") pod \"community-operators-nwgjl\" (UID: \"2778ca8a-f777-4571-9f19-e7c7992611d8\") " pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.281664 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2778ca8a-f777-4571-9f19-e7c7992611d8-catalog-content\") pod \"community-operators-nwgjl\" (UID: \"2778ca8a-f777-4571-9f19-e7c7992611d8\") " pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.282133 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2778ca8a-f777-4571-9f19-e7c7992611d8-catalog-content\") pod \"community-operators-nwgjl\" (UID: \"2778ca8a-f777-4571-9f19-e7c7992611d8\") " pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.282611 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2778ca8a-f777-4571-9f19-e7c7992611d8-utilities\") pod \"community-operators-nwgjl\" (UID: \"2778ca8a-f777-4571-9f19-e7c7992611d8\") " pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.300582 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-756r4\" (UniqueName: \"kubernetes.io/projected/2778ca8a-f777-4571-9f19-e7c7992611d8-kube-api-access-756r4\") pod \"community-operators-nwgjl\" (UID: \"2778ca8a-f777-4571-9f19-e7c7992611d8\") " pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.483219 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:08 crc kubenswrapper[5049]: I0127 19:07:08.833992 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xkv4v"] Jan 27 19:07:09 crc kubenswrapper[5049]: I0127 19:07:09.202687 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nwgjl"] Jan 27 19:07:09 crc kubenswrapper[5049]: I0127 19:07:09.682104 5049 generic.go:334] "Generic (PLEG): container finished" podID="5a366ce2-40d3-40b0-a819-02dcebd0762c" containerID="de0129b125dfe124f0b762aaa38ae9234cd48e4cc23ef7ce443d67230fc46047" exitCode=0 Jan 27 19:07:09 crc kubenswrapper[5049]: I0127 19:07:09.682385 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkv4v" event={"ID":"5a366ce2-40d3-40b0-a819-02dcebd0762c","Type":"ContainerDied","Data":"de0129b125dfe124f0b762aaa38ae9234cd48e4cc23ef7ce443d67230fc46047"} Jan 27 19:07:09 crc kubenswrapper[5049]: I0127 19:07:09.682414 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkv4v" event={"ID":"5a366ce2-40d3-40b0-a819-02dcebd0762c","Type":"ContainerStarted","Data":"1b2a7b90a94555b9a0975f0005d8520bb5026492aef1dd02690766f2ec38fb2e"} Jan 27 19:07:09 crc kubenswrapper[5049]: I0127 19:07:09.687027 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:07:09 crc kubenswrapper[5049]: I0127 19:07:09.692817 5049 generic.go:334] "Generic (PLEG): container finished" podID="2778ca8a-f777-4571-9f19-e7c7992611d8" containerID="6ceb72fd2a8e90c02a41cd9ee1c20910c553985759df915a2da2af4ff691f165" exitCode=0 Jan 27 19:07:09 crc kubenswrapper[5049]: I0127 19:07:09.692877 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwgjl" event={"ID":"2778ca8a-f777-4571-9f19-e7c7992611d8","Type":"ContainerDied","Data":"6ceb72fd2a8e90c02a41cd9ee1c20910c553985759df915a2da2af4ff691f165"} Jan 27 19:07:09 crc kubenswrapper[5049]: I0127 19:07:09.692901 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwgjl" event={"ID":"2778ca8a-f777-4571-9f19-e7c7992611d8","Type":"ContainerStarted","Data":"8e7e584cda6958caf5128b965cb884c4baee98b79a74b2a5cdaaa21a9259e053"} Jan 27 19:07:10 crc kubenswrapper[5049]: I0127 19:07:10.702420 5049 generic.go:334] "Generic (PLEG): container finished" podID="5a366ce2-40d3-40b0-a819-02dcebd0762c" containerID="97d283b1e3d1ccd228f4ba29d49edcfe71b96686f456d9edff96b5966ba579c4" exitCode=0 Jan 27 19:07:10 crc kubenswrapper[5049]: I0127 19:07:10.702501 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkv4v" event={"ID":"5a366ce2-40d3-40b0-a819-02dcebd0762c","Type":"ContainerDied","Data":"97d283b1e3d1ccd228f4ba29d49edcfe71b96686f456d9edff96b5966ba579c4"} Jan 27 19:07:11 crc kubenswrapper[5049]: I0127 19:07:11.720036 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwgjl" event={"ID":"2778ca8a-f777-4571-9f19-e7c7992611d8","Type":"ContainerStarted","Data":"41d211c2b0138f78aad8f1e0123ce0c1cd38875e4e5d9ddabec0c1498c344704"} Jan 27 19:07:11 crc kubenswrapper[5049]: I0127 19:07:11.723932 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkv4v" 
event={"ID":"5a366ce2-40d3-40b0-a819-02dcebd0762c","Type":"ContainerStarted","Data":"c92f7eec52c97f3216e7a4aaf5867669245d1e1577f0aaea345f6e91b1a90971"} Jan 27 19:07:11 crc kubenswrapper[5049]: I0127 19:07:11.723999 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-czm4r"] Jan 27 19:07:11 crc kubenswrapper[5049]: I0127 19:07:11.729005 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:11 crc kubenswrapper[5049]: I0127 19:07:11.767616 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-czm4r"] Jan 27 19:07:11 crc kubenswrapper[5049]: I0127 19:07:11.782452 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xkv4v" podStartSLOduration=3.384452783 podStartE2EDuration="4.782432572s" podCreationTimestamp="2026-01-27 19:07:07 +0000 UTC" firstStartedPulling="2026-01-27 19:07:09.686755095 +0000 UTC m=+7804.785728644" lastFinishedPulling="2026-01-27 19:07:11.084734884 +0000 UTC m=+7806.183708433" observedRunningTime="2026-01-27 19:07:11.773650649 +0000 UTC m=+7806.872624198" watchObservedRunningTime="2026-01-27 19:07:11.782432572 +0000 UTC m=+7806.881406121" Jan 27 19:07:11 crc kubenswrapper[5049]: I0127 19:07:11.869882 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-utilities\") pod \"redhat-operators-czm4r\" (UID: \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\") " pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:11 crc kubenswrapper[5049]: I0127 19:07:11.870222 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-catalog-content\") pod \"redhat-operators-czm4r\" (UID: \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\") " pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:11 crc kubenswrapper[5049]: I0127 19:07:11.870363 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljhjg\" (UniqueName: \"kubernetes.io/projected/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-kube-api-access-ljhjg\") pod \"redhat-operators-czm4r\" (UID: \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\") " pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:11 crc kubenswrapper[5049]: I0127 19:07:11.972858 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljhjg\" (UniqueName: \"kubernetes.io/projected/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-kube-api-access-ljhjg\") pod \"redhat-operators-czm4r\" (UID: \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\") " pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:11 crc kubenswrapper[5049]: I0127 19:07:11.973042 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-utilities\") pod \"redhat-operators-czm4r\" (UID: \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\") " pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:11 crc kubenswrapper[5049]: I0127 19:07:11.973109 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-catalog-content\") pod \"redhat-operators-czm4r\" (UID: \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\") " pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:11 crc kubenswrapper[5049]: I0127 19:07:11.973513 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-utilities\") pod \"redhat-operators-czm4r\" (UID: \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\") " pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:11 crc kubenswrapper[5049]: I0127 19:07:11.973552 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-catalog-content\") pod \"redhat-operators-czm4r\" (UID: \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\") " pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:12 crc kubenswrapper[5049]: I0127 19:07:11.999782 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljhjg\" (UniqueName: \"kubernetes.io/projected/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-kube-api-access-ljhjg\") pod \"redhat-operators-czm4r\" (UID: \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\") " pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:12 crc kubenswrapper[5049]: I0127 19:07:12.064345 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:12 crc kubenswrapper[5049]: I0127 19:07:12.625451 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-czm4r"] Jan 27 19:07:12 crc kubenswrapper[5049]: W0127 19:07:12.626124 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d406e32_b6ab_4d2e_ab26_132c549a6dc8.slice/crio-1b30cbed0e2894994c8bda19ab89aac9e5c04077fd2184c0d94227044435aebf WatchSource:0}: Error finding container 1b30cbed0e2894994c8bda19ab89aac9e5c04077fd2184c0d94227044435aebf: Status 404 returned error can't find the container with id 1b30cbed0e2894994c8bda19ab89aac9e5c04077fd2184c0d94227044435aebf Jan 27 19:07:12 crc kubenswrapper[5049]: I0127 19:07:12.732598 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-czm4r" event={"ID":"2d406e32-b6ab-4d2e-ab26-132c549a6dc8","Type":"ContainerStarted","Data":"1b30cbed0e2894994c8bda19ab89aac9e5c04077fd2184c0d94227044435aebf"} Jan 27 19:07:13 crc kubenswrapper[5049]: I0127 19:07:13.742567 5049 generic.go:334] "Generic (PLEG): container finished" podID="2778ca8a-f777-4571-9f19-e7c7992611d8" containerID="41d211c2b0138f78aad8f1e0123ce0c1cd38875e4e5d9ddabec0c1498c344704" exitCode=0 Jan 27 19:07:13 crc kubenswrapper[5049]: I0127 19:07:13.742664 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwgjl" event={"ID":"2778ca8a-f777-4571-9f19-e7c7992611d8","Type":"ContainerDied","Data":"41d211c2b0138f78aad8f1e0123ce0c1cd38875e4e5d9ddabec0c1498c344704"} Jan 27 19:07:13 crc kubenswrapper[5049]: I0127 19:07:13.745121 5049 generic.go:334] "Generic (PLEG): container finished" podID="2d406e32-b6ab-4d2e-ab26-132c549a6dc8" containerID="a861c8c2cea204c40d6724cb3160a367065f47d99b2a0b6971022ce3ee8c826e" exitCode=0 Jan 27 19:07:13 crc kubenswrapper[5049]: I0127 19:07:13.745176 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-czm4r" event={"ID":"2d406e32-b6ab-4d2e-ab26-132c549a6dc8","Type":"ContainerDied","Data":"a861c8c2cea204c40d6724cb3160a367065f47d99b2a0b6971022ce3ee8c826e"} Jan 27 19:07:14 crc kubenswrapper[5049]: I0127 19:07:14.766224 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwgjl" event={"ID":"2778ca8a-f777-4571-9f19-e7c7992611d8","Type":"ContainerStarted","Data":"d119f6e04b325e5f1399a121d5b4f147d54d1dc3f52fead42cfdb123c7ec5520"} Jan 27 19:07:14 crc kubenswrapper[5049]: I0127 19:07:14.768532 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-czm4r" event={"ID":"2d406e32-b6ab-4d2e-ab26-132c549a6dc8","Type":"ContainerStarted","Data":"c887db4f7f1a53c4d4361b1c48dc5245a158714fe42c989d67c0fe30e3f07e2e"} Jan 27 19:07:14 crc kubenswrapper[5049]: I0127 19:07:14.800232 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nwgjl" podStartSLOduration=2.31511396 podStartE2EDuration="6.800208313s" podCreationTimestamp="2026-01-27 19:07:08 +0000 UTC" firstStartedPulling="2026-01-27 19:07:09.695209638 +0000 UTC m=+7804.794183187" lastFinishedPulling="2026-01-27 19:07:14.180303991 +0000 UTC m=+7809.279277540" observedRunningTime="2026-01-27 19:07:14.789421113 +0000 UTC m=+7809.888394652" watchObservedRunningTime="2026-01-27 19:07:14.800208313 +0000 UTC m=+7809.899181862" Jan 27 19:07:16 crc kubenswrapper[5049]: I0127 19:07:16.786769 5049 generic.go:334] "Generic (PLEG): container finished" podID="2d406e32-b6ab-4d2e-ab26-132c549a6dc8" containerID="c887db4f7f1a53c4d4361b1c48dc5245a158714fe42c989d67c0fe30e3f07e2e" exitCode=0 Jan 27 19:07:16 crc kubenswrapper[5049]: I0127 19:07:16.786856 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-czm4r" event={"ID":"2d406e32-b6ab-4d2e-ab26-132c549a6dc8","Type":"ContainerDied","Data":"c887db4f7f1a53c4d4361b1c48dc5245a158714fe42c989d67c0fe30e3f07e2e"} Jan 27 19:07:17 crc kubenswrapper[5049]: I0127 19:07:17.798316 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-czm4r" event={"ID":"2d406e32-b6ab-4d2e-ab26-132c549a6dc8","Type":"ContainerStarted","Data":"757f9d8e244291597507899118596b3ee63bd3f941aeb33a6eb4d1da67fc145e"} Jan 27 19:07:17 crc kubenswrapper[5049]: I0127 19:07:17.823270 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-czm4r" podStartSLOduration=3.045063875 podStartE2EDuration="6.823246266s" podCreationTimestamp="2026-01-27 19:07:11 +0000 UTC" firstStartedPulling="2026-01-27 19:07:13.747565017 +0000 UTC m=+7808.846538566" lastFinishedPulling="2026-01-27 19:07:17.525747398 +0000 UTC m=+7812.624720957" observedRunningTime="2026-01-27 19:07:17.816931364 +0000 UTC m=+7812.915904913" watchObservedRunningTime="2026-01-27 19:07:17.823246266 +0000 UTC m=+7812.922219845" Jan 27 19:07:18 crc kubenswrapper[5049]: I0127 19:07:18.247989 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:18 crc kubenswrapper[5049]: I0127 19:07:18.248299 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:18 crc kubenswrapper[5049]: I0127 19:07:18.295422 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:18 crc kubenswrapper[5049]: I0127 19:07:18.485102 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:18 crc kubenswrapper[5049]: I0127 19:07:18.485149 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:18 crc kubenswrapper[5049]: I0127 19:07:18.551066 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:18 crc kubenswrapper[5049]: I0127 19:07:18.854008 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.104941 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xkv4v"] Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.105446 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xkv4v" podUID="5a366ce2-40d3-40b0-a819-02dcebd0762c" containerName="registry-server" containerID="cri-o://c92f7eec52c97f3216e7a4aaf5867669245d1e1577f0aaea345f6e91b1a90971" gracePeriod=2 Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.567286 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.588157 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a366ce2-40d3-40b0-a819-02dcebd0762c-utilities\") pod \"5a366ce2-40d3-40b0-a819-02dcebd0762c\" (UID: \"5a366ce2-40d3-40b0-a819-02dcebd0762c\") " Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.588215 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr6wf\" (UniqueName: \"kubernetes.io/projected/5a366ce2-40d3-40b0-a819-02dcebd0762c-kube-api-access-kr6wf\") pod \"5a366ce2-40d3-40b0-a819-02dcebd0762c\" (UID: \"5a366ce2-40d3-40b0-a819-02dcebd0762c\") " Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.588248 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a366ce2-40d3-40b0-a819-02dcebd0762c-catalog-content\") pod \"5a366ce2-40d3-40b0-a819-02dcebd0762c\" (UID: \"5a366ce2-40d3-40b0-a819-02dcebd0762c\") " Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.589077 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a366ce2-40d3-40b0-a819-02dcebd0762c-utilities" (OuterVolumeSpecName: "utilities") pod "5a366ce2-40d3-40b0-a819-02dcebd0762c" (UID: "5a366ce2-40d3-40b0-a819-02dcebd0762c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.595011 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a366ce2-40d3-40b0-a819-02dcebd0762c-kube-api-access-kr6wf" (OuterVolumeSpecName: "kube-api-access-kr6wf") pod "5a366ce2-40d3-40b0-a819-02dcebd0762c" (UID: "5a366ce2-40d3-40b0-a819-02dcebd0762c"). InnerVolumeSpecName "kube-api-access-kr6wf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.638265 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a366ce2-40d3-40b0-a819-02dcebd0762c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a366ce2-40d3-40b0-a819-02dcebd0762c" (UID: "5a366ce2-40d3-40b0-a819-02dcebd0762c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.696061 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a366ce2-40d3-40b0-a819-02dcebd0762c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.696122 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr6wf\" (UniqueName: \"kubernetes.io/projected/5a366ce2-40d3-40b0-a819-02dcebd0762c-kube-api-access-kr6wf\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.696134 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a366ce2-40d3-40b0-a819-02dcebd0762c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.837346 5049 generic.go:334] "Generic (PLEG): container finished" podID="5a366ce2-40d3-40b0-a819-02dcebd0762c" containerID="c92f7eec52c97f3216e7a4aaf5867669245d1e1577f0aaea345f6e91b1a90971" exitCode=0 Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.837389 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkv4v" event={"ID":"5a366ce2-40d3-40b0-a819-02dcebd0762c","Type":"ContainerDied","Data":"c92f7eec52c97f3216e7a4aaf5867669245d1e1577f0aaea345f6e91b1a90971"} Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.837415 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkv4v" event={"ID":"5a366ce2-40d3-40b0-a819-02dcebd0762c","Type":"ContainerDied","Data":"1b2a7b90a94555b9a0975f0005d8520bb5026492aef1dd02690766f2ec38fb2e"} Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.837433 5049 scope.go:117] "RemoveContainer" containerID="c92f7eec52c97f3216e7a4aaf5867669245d1e1577f0aaea345f6e91b1a90971" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.837559 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xkv4v" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.862838 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xkv4v"] Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.877548 5049 scope.go:117] "RemoveContainer" containerID="97d283b1e3d1ccd228f4ba29d49edcfe71b96686f456d9edff96b5966ba579c4" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.882489 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xkv4v"] Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.908309 5049 scope.go:117] "RemoveContainer" containerID="de0129b125dfe124f0b762aaa38ae9234cd48e4cc23ef7ce443d67230fc46047" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.951132 5049 scope.go:117] "RemoveContainer" containerID="c92f7eec52c97f3216e7a4aaf5867669245d1e1577f0aaea345f6e91b1a90971" Jan 27 19:07:21 crc kubenswrapper[5049]: E0127 19:07:21.951712 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92f7eec52c97f3216e7a4aaf5867669245d1e1577f0aaea345f6e91b1a90971\": container with ID starting with c92f7eec52c97f3216e7a4aaf5867669245d1e1577f0aaea345f6e91b1a90971 not found: ID does not exist" containerID="c92f7eec52c97f3216e7a4aaf5867669245d1e1577f0aaea345f6e91b1a90971" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.951762 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92f7eec52c97f3216e7a4aaf5867669245d1e1577f0aaea345f6e91b1a90971"} err="failed to get container status \"c92f7eec52c97f3216e7a4aaf5867669245d1e1577f0aaea345f6e91b1a90971\": rpc error: code = NotFound desc = could not find container \"c92f7eec52c97f3216e7a4aaf5867669245d1e1577f0aaea345f6e91b1a90971\": container with ID starting with c92f7eec52c97f3216e7a4aaf5867669245d1e1577f0aaea345f6e91b1a90971 not found: ID does not exist" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.951792 5049 scope.go:117] "RemoveContainer" containerID="97d283b1e3d1ccd228f4ba29d49edcfe71b96686f456d9edff96b5966ba579c4" Jan 27 19:07:21 crc kubenswrapper[5049]: E0127 19:07:21.952852 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97d283b1e3d1ccd228f4ba29d49edcfe71b96686f456d9edff96b5966ba579c4\": container with ID starting with 97d283b1e3d1ccd228f4ba29d49edcfe71b96686f456d9edff96b5966ba579c4 not found: ID does not exist" containerID="97d283b1e3d1ccd228f4ba29d49edcfe71b96686f456d9edff96b5966ba579c4" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.952871 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97d283b1e3d1ccd228f4ba29d49edcfe71b96686f456d9edff96b5966ba579c4"} err="failed to get container status \"97d283b1e3d1ccd228f4ba29d49edcfe71b96686f456d9edff96b5966ba579c4\": rpc error: code = NotFound desc = could not find container \"97d283b1e3d1ccd228f4ba29d49edcfe71b96686f456d9edff96b5966ba579c4\": container with ID starting with 97d283b1e3d1ccd228f4ba29d49edcfe71b96686f456d9edff96b5966ba579c4 not found: ID does not exist" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.952885 5049 scope.go:117] "RemoveContainer" containerID="de0129b125dfe124f0b762aaa38ae9234cd48e4cc23ef7ce443d67230fc46047" Jan 27 19:07:21 crc kubenswrapper[5049]: E0127 19:07:21.953201 5049 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"de0129b125dfe124f0b762aaa38ae9234cd48e4cc23ef7ce443d67230fc46047\": container with ID starting with de0129b125dfe124f0b762aaa38ae9234cd48e4cc23ef7ce443d67230fc46047 not found: ID does not exist" containerID="de0129b125dfe124f0b762aaa38ae9234cd48e4cc23ef7ce443d67230fc46047" Jan 27 19:07:21 crc kubenswrapper[5049]: I0127 19:07:21.953233 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de0129b125dfe124f0b762aaa38ae9234cd48e4cc23ef7ce443d67230fc46047"} err="failed to get container status \"de0129b125dfe124f0b762aaa38ae9234cd48e4cc23ef7ce443d67230fc46047\": rpc error: code = NotFound desc = could not find container \"de0129b125dfe124f0b762aaa38ae9234cd48e4cc23ef7ce443d67230fc46047\": container with ID starting with de0129b125dfe124f0b762aaa38ae9234cd48e4cc23ef7ce443d67230fc46047 not found: ID does not exist" Jan 27 19:07:22 crc kubenswrapper[5049]: I0127 19:07:22.065322 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:22 crc kubenswrapper[5049]: I0127 19:07:22.065650 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:23 crc kubenswrapper[5049]: I0127 19:07:23.113827 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-czm4r" podUID="2d406e32-b6ab-4d2e-ab26-132c549a6dc8" containerName="registry-server" probeResult="failure" output=< Jan 27 19:07:23 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 19:07:23 crc kubenswrapper[5049]: > Jan 27 19:07:23 crc kubenswrapper[5049]: I0127 19:07:23.663826 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a366ce2-40d3-40b0-a819-02dcebd0762c" path="/var/lib/kubelet/pods/5a366ce2-40d3-40b0-a819-02dcebd0762c/volumes" Jan 27 19:07:28 crc kubenswrapper[5049]: I0127 19:07:28.531885 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:28 crc kubenswrapper[5049]: I0127 19:07:28.577020 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nwgjl"] Jan 27 19:07:28 crc kubenswrapper[5049]: I0127 19:07:28.900169 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nwgjl" podUID="2778ca8a-f777-4571-9f19-e7c7992611d8" containerName="registry-server" containerID="cri-o://d119f6e04b325e5f1399a121d5b4f147d54d1dc3f52fead42cfdb123c7ec5520" gracePeriod=2 Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.381916 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.477367 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2778ca8a-f777-4571-9f19-e7c7992611d8-utilities\") pod \"2778ca8a-f777-4571-9f19-e7c7992611d8\" (UID: \"2778ca8a-f777-4571-9f19-e7c7992611d8\") " Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.477492 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2778ca8a-f777-4571-9f19-e7c7992611d8-catalog-content\") pod \"2778ca8a-f777-4571-9f19-e7c7992611d8\" (UID: \"2778ca8a-f777-4571-9f19-e7c7992611d8\") " Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.477777 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-756r4\" (UniqueName: \"kubernetes.io/projected/2778ca8a-f777-4571-9f19-e7c7992611d8-kube-api-access-756r4\") pod \"2778ca8a-f777-4571-9f19-e7c7992611d8\" (UID: \"2778ca8a-f777-4571-9f19-e7c7992611d8\") " Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.478217 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2778ca8a-f777-4571-9f19-e7c7992611d8-utilities" (OuterVolumeSpecName: "utilities") pod "2778ca8a-f777-4571-9f19-e7c7992611d8" (UID: "2778ca8a-f777-4571-9f19-e7c7992611d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.485278 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2778ca8a-f777-4571-9f19-e7c7992611d8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.486640 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2778ca8a-f777-4571-9f19-e7c7992611d8-kube-api-access-756r4" (OuterVolumeSpecName: "kube-api-access-756r4") pod "2778ca8a-f777-4571-9f19-e7c7992611d8" (UID: "2778ca8a-f777-4571-9f19-e7c7992611d8"). InnerVolumeSpecName "kube-api-access-756r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.559534 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2778ca8a-f777-4571-9f19-e7c7992611d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2778ca8a-f777-4571-9f19-e7c7992611d8" (UID: "2778ca8a-f777-4571-9f19-e7c7992611d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.586886 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2778ca8a-f777-4571-9f19-e7c7992611d8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.586922 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-756r4\" (UniqueName: \"kubernetes.io/projected/2778ca8a-f777-4571-9f19-e7c7992611d8-kube-api-access-756r4\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.912126 5049 generic.go:334] "Generic (PLEG): container finished" podID="2778ca8a-f777-4571-9f19-e7c7992611d8" containerID="d119f6e04b325e5f1399a121d5b4f147d54d1dc3f52fead42cfdb123c7ec5520" exitCode=0 Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.912191 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwgjl" event={"ID":"2778ca8a-f777-4571-9f19-e7c7992611d8","Type":"ContainerDied","Data":"d119f6e04b325e5f1399a121d5b4f147d54d1dc3f52fead42cfdb123c7ec5520"} Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.912238 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwgjl" event={"ID":"2778ca8a-f777-4571-9f19-e7c7992611d8","Type":"ContainerDied","Data":"8e7e584cda6958caf5128b965cb884c4baee98b79a74b2a5cdaaa21a9259e053"} Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.912263 5049 scope.go:117] "RemoveContainer" containerID="d119f6e04b325e5f1399a121d5b4f147d54d1dc3f52fead42cfdb123c7ec5520" Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.912191 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nwgjl" Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.939545 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nwgjl"] Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.944159 5049 scope.go:117] "RemoveContainer" containerID="41d211c2b0138f78aad8f1e0123ce0c1cd38875e4e5d9ddabec0c1498c344704" Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.946042 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nwgjl"] Jan 27 19:07:29 crc kubenswrapper[5049]: I0127 19:07:29.979842 5049 scope.go:117] "RemoveContainer" containerID="6ceb72fd2a8e90c02a41cd9ee1c20910c553985759df915a2da2af4ff691f165" Jan 27 19:07:30 crc kubenswrapper[5049]: I0127 19:07:30.017404 5049 scope.go:117] "RemoveContainer" containerID="d119f6e04b325e5f1399a121d5b4f147d54d1dc3f52fead42cfdb123c7ec5520" Jan 27 19:07:30 crc kubenswrapper[5049]: E0127 19:07:30.018257 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d119f6e04b325e5f1399a121d5b4f147d54d1dc3f52fead42cfdb123c7ec5520\": container with ID starting with d119f6e04b325e5f1399a121d5b4f147d54d1dc3f52fead42cfdb123c7ec5520 not found: ID does not exist" containerID="d119f6e04b325e5f1399a121d5b4f147d54d1dc3f52fead42cfdb123c7ec5520" Jan 27 19:07:30 crc kubenswrapper[5049]: I0127 19:07:30.018294 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d119f6e04b325e5f1399a121d5b4f147d54d1dc3f52fead42cfdb123c7ec5520"} err="failed to get container status \"d119f6e04b325e5f1399a121d5b4f147d54d1dc3f52fead42cfdb123c7ec5520\": rpc error: code = NotFound desc = could not find container \"d119f6e04b325e5f1399a121d5b4f147d54d1dc3f52fead42cfdb123c7ec5520\": container with ID starting with d119f6e04b325e5f1399a121d5b4f147d54d1dc3f52fead42cfdb123c7ec5520 not found: ID does not exist" Jan 27 19:07:30 crc kubenswrapper[5049]: I0127 19:07:30.018318 5049 scope.go:117] "RemoveContainer" containerID="41d211c2b0138f78aad8f1e0123ce0c1cd38875e4e5d9ddabec0c1498c344704" Jan 27 19:07:30 crc kubenswrapper[5049]: E0127 19:07:30.019642 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41d211c2b0138f78aad8f1e0123ce0c1cd38875e4e5d9ddabec0c1498c344704\": container with ID starting with 41d211c2b0138f78aad8f1e0123ce0c1cd38875e4e5d9ddabec0c1498c344704 not found: ID does not exist" containerID="41d211c2b0138f78aad8f1e0123ce0c1cd38875e4e5d9ddabec0c1498c344704" Jan 27 19:07:30 crc kubenswrapper[5049]: I0127 19:07:30.019796 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41d211c2b0138f78aad8f1e0123ce0c1cd38875e4e5d9ddabec0c1498c344704"} err="failed to get container status \"41d211c2b0138f78aad8f1e0123ce0c1cd38875e4e5d9ddabec0c1498c344704\": rpc error: code = NotFound desc = could not find container \"41d211c2b0138f78aad8f1e0123ce0c1cd38875e4e5d9ddabec0c1498c344704\": container with ID starting with 41d211c2b0138f78aad8f1e0123ce0c1cd38875e4e5d9ddabec0c1498c344704 not found: ID does not exist" Jan 27 19:07:30 crc kubenswrapper[5049]: I0127 19:07:30.019868 5049 scope.go:117] "RemoveContainer" containerID="6ceb72fd2a8e90c02a41cd9ee1c20910c553985759df915a2da2af4ff691f165" Jan 27 19:07:30 crc kubenswrapper[5049]: E0127 19:07:30.020842 5049 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6ceb72fd2a8e90c02a41cd9ee1c20910c553985759df915a2da2af4ff691f165\": container with ID starting with 6ceb72fd2a8e90c02a41cd9ee1c20910c553985759df915a2da2af4ff691f165 not found: ID does not exist" containerID="6ceb72fd2a8e90c02a41cd9ee1c20910c553985759df915a2da2af4ff691f165" Jan 27 19:07:30 crc kubenswrapper[5049]: I0127 19:07:30.020910 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ceb72fd2a8e90c02a41cd9ee1c20910c553985759df915a2da2af4ff691f165"} err="failed to get container status \"6ceb72fd2a8e90c02a41cd9ee1c20910c553985759df915a2da2af4ff691f165\": rpc error: code = NotFound desc = could not find container \"6ceb72fd2a8e90c02a41cd9ee1c20910c553985759df915a2da2af4ff691f165\": container with ID starting with 6ceb72fd2a8e90c02a41cd9ee1c20910c553985759df915a2da2af4ff691f165 not found: ID does not exist" Jan 27 19:07:31 crc kubenswrapper[5049]: I0127 19:07:31.660503 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2778ca8a-f777-4571-9f19-e7c7992611d8" path="/var/lib/kubelet/pods/2778ca8a-f777-4571-9f19-e7c7992611d8/volumes" Jan 27 19:07:32 crc kubenswrapper[5049]: I0127 19:07:32.111844 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:32 crc kubenswrapper[5049]: I0127 19:07:32.159201 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:33 crc kubenswrapper[5049]: I0127 19:07:33.165920 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-czm4r"] Jan 27 19:07:33 crc kubenswrapper[5049]: I0127 19:07:33.950270 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-czm4r" podUID="2d406e32-b6ab-4d2e-ab26-132c549a6dc8" containerName="registry-server" containerID="cri-o://757f9d8e244291597507899118596b3ee63bd3f941aeb33a6eb4d1da67fc145e" gracePeriod=2 Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.401022 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.499357 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-utilities\") pod \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\" (UID: \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\") " Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.499438 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljhjg\" (UniqueName: \"kubernetes.io/projected/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-kube-api-access-ljhjg\") pod \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\" (UID: \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\") " Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.499588 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-catalog-content\") pod \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\" (UID: \"2d406e32-b6ab-4d2e-ab26-132c549a6dc8\") " Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.500690 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-utilities" (OuterVolumeSpecName: "utilities") pod "2d406e32-b6ab-4d2e-ab26-132c549a6dc8" (UID: "2d406e32-b6ab-4d2e-ab26-132c549a6dc8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.506194 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-kube-api-access-ljhjg" (OuterVolumeSpecName: "kube-api-access-ljhjg") pod "2d406e32-b6ab-4d2e-ab26-132c549a6dc8" (UID: "2d406e32-b6ab-4d2e-ab26-132c549a6dc8"). InnerVolumeSpecName "kube-api-access-ljhjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.601847 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.602199 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljhjg\" (UniqueName: \"kubernetes.io/projected/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-kube-api-access-ljhjg\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.646390 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d406e32-b6ab-4d2e-ab26-132c549a6dc8" (UID: "2d406e32-b6ab-4d2e-ab26-132c549a6dc8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.704573 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d406e32-b6ab-4d2e-ab26-132c549a6dc8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.967472 5049 generic.go:334] "Generic (PLEG): container finished" podID="2d406e32-b6ab-4d2e-ab26-132c549a6dc8" containerID="757f9d8e244291597507899118596b3ee63bd3f941aeb33a6eb4d1da67fc145e" exitCode=0 Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.967525 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-czm4r" event={"ID":"2d406e32-b6ab-4d2e-ab26-132c549a6dc8","Type":"ContainerDied","Data":"757f9d8e244291597507899118596b3ee63bd3f941aeb33a6eb4d1da67fc145e"} Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.967559 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-czm4r" event={"ID":"2d406e32-b6ab-4d2e-ab26-132c549a6dc8","Type":"ContainerDied","Data":"1b30cbed0e2894994c8bda19ab89aac9e5c04077fd2184c0d94227044435aebf"} Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.967586 5049 scope.go:117] "RemoveContainer" containerID="757f9d8e244291597507899118596b3ee63bd3f941aeb33a6eb4d1da67fc145e" Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.967774 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-czm4r" Jan 27 19:07:34 crc kubenswrapper[5049]: I0127 19:07:34.998835 5049 scope.go:117] "RemoveContainer" containerID="c887db4f7f1a53c4d4361b1c48dc5245a158714fe42c989d67c0fe30e3f07e2e" Jan 27 19:07:35 crc kubenswrapper[5049]: I0127 19:07:35.011611 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-czm4r"] Jan 27 19:07:35 crc kubenswrapper[5049]: I0127 19:07:35.021659 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-czm4r"] Jan 27 19:07:35 crc kubenswrapper[5049]: I0127 19:07:35.029208 5049 scope.go:117] "RemoveContainer" containerID="a861c8c2cea204c40d6724cb3160a367065f47d99b2a0b6971022ce3ee8c826e" Jan 27 19:07:35 crc kubenswrapper[5049]: I0127 19:07:35.081978 5049 scope.go:117] "RemoveContainer" containerID="757f9d8e244291597507899118596b3ee63bd3f941aeb33a6eb4d1da67fc145e" Jan 27 19:07:35 crc kubenswrapper[5049]: E0127 19:07:35.082872 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"757f9d8e244291597507899118596b3ee63bd3f941aeb33a6eb4d1da67fc145e\": container with ID starting with 757f9d8e244291597507899118596b3ee63bd3f941aeb33a6eb4d1da67fc145e not found: ID does not exist" containerID="757f9d8e244291597507899118596b3ee63bd3f941aeb33a6eb4d1da67fc145e" Jan 27 19:07:35 crc kubenswrapper[5049]: I0127 19:07:35.082963 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"757f9d8e244291597507899118596b3ee63bd3f941aeb33a6eb4d1da67fc145e"} err="failed to get container status \"757f9d8e244291597507899118596b3ee63bd3f941aeb33a6eb4d1da67fc145e\": rpc error: code = NotFound desc = could not find container \"757f9d8e244291597507899118596b3ee63bd3f941aeb33a6eb4d1da67fc145e\": container with ID starting with 757f9d8e244291597507899118596b3ee63bd3f941aeb33a6eb4d1da67fc145e not found: ID does not exist" Jan 27 19:07:35 crc 
kubenswrapper[5049]: I0127 19:07:35.083017 5049 scope.go:117] "RemoveContainer" containerID="c887db4f7f1a53c4d4361b1c48dc5245a158714fe42c989d67c0fe30e3f07e2e" Jan 27 19:07:35 crc kubenswrapper[5049]: E0127 19:07:35.083622 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c887db4f7f1a53c4d4361b1c48dc5245a158714fe42c989d67c0fe30e3f07e2e\": container with ID starting with c887db4f7f1a53c4d4361b1c48dc5245a158714fe42c989d67c0fe30e3f07e2e not found: ID does not exist" containerID="c887db4f7f1a53c4d4361b1c48dc5245a158714fe42c989d67c0fe30e3f07e2e" Jan 27 19:07:35 crc kubenswrapper[5049]: I0127 19:07:35.083667 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c887db4f7f1a53c4d4361b1c48dc5245a158714fe42c989d67c0fe30e3f07e2e"} err="failed to get container status \"c887db4f7f1a53c4d4361b1c48dc5245a158714fe42c989d67c0fe30e3f07e2e\": rpc error: code = NotFound desc = could not find container \"c887db4f7f1a53c4d4361b1c48dc5245a158714fe42c989d67c0fe30e3f07e2e\": container with ID starting with c887db4f7f1a53c4d4361b1c48dc5245a158714fe42c989d67c0fe30e3f07e2e not found: ID does not exist" Jan 27 19:07:35 crc kubenswrapper[5049]: I0127 19:07:35.083752 5049 scope.go:117] "RemoveContainer" containerID="a861c8c2cea204c40d6724cb3160a367065f47d99b2a0b6971022ce3ee8c826e" Jan 27 19:07:35 crc kubenswrapper[5049]: E0127 19:07:35.084250 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a861c8c2cea204c40d6724cb3160a367065f47d99b2a0b6971022ce3ee8c826e\": container with ID starting with a861c8c2cea204c40d6724cb3160a367065f47d99b2a0b6971022ce3ee8c826e not found: ID does not exist" containerID="a861c8c2cea204c40d6724cb3160a367065f47d99b2a0b6971022ce3ee8c826e" Jan 27 19:07:35 crc kubenswrapper[5049]: I0127 19:07:35.084301 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a861c8c2cea204c40d6724cb3160a367065f47d99b2a0b6971022ce3ee8c826e"} err="failed to get container status \"a861c8c2cea204c40d6724cb3160a367065f47d99b2a0b6971022ce3ee8c826e\": rpc error: code = NotFound desc = could not find container \"a861c8c2cea204c40d6724cb3160a367065f47d99b2a0b6971022ce3ee8c826e\": container with ID starting with a861c8c2cea204c40d6724cb3160a367065f47d99b2a0b6971022ce3ee8c826e not found: ID does not exist" Jan 27 19:07:35 crc kubenswrapper[5049]: I0127 19:07:35.658819 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d406e32-b6ab-4d2e-ab26-132c549a6dc8" path="/var/lib/kubelet/pods/2d406e32-b6ab-4d2e-ab26-132c549a6dc8/volumes" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.077126 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6hlnx"] Jan 27 19:08:06 crc kubenswrapper[5049]: E0127 19:08:06.078112 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a366ce2-40d3-40b0-a819-02dcebd0762c" containerName="extract-utilities" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.078134 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a366ce2-40d3-40b0-a819-02dcebd0762c" containerName="extract-utilities" Jan 27 19:08:06 crc kubenswrapper[5049]: E0127 19:08:06.078152 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2778ca8a-f777-4571-9f19-e7c7992611d8" containerName="extract-utilities" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.078159 5049 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2778ca8a-f777-4571-9f19-e7c7992611d8" containerName="extract-utilities" Jan 27 19:08:06 crc kubenswrapper[5049]: E0127 19:08:06.078170 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2778ca8a-f777-4571-9f19-e7c7992611d8" containerName="extract-content" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.078178 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2778ca8a-f777-4571-9f19-e7c7992611d8" containerName="extract-content" Jan 27 19:08:06 crc kubenswrapper[5049]: E0127 19:08:06.078199 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d406e32-b6ab-4d2e-ab26-132c549a6dc8" containerName="extract-utilities" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.078206 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d406e32-b6ab-4d2e-ab26-132c549a6dc8" containerName="extract-utilities" Jan 27 19:08:06 crc kubenswrapper[5049]: E0127 19:08:06.078221 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d406e32-b6ab-4d2e-ab26-132c549a6dc8" containerName="registry-server" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.078229 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d406e32-b6ab-4d2e-ab26-132c549a6dc8" containerName="registry-server" Jan 27 19:08:06 crc kubenswrapper[5049]: E0127 19:08:06.078252 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a366ce2-40d3-40b0-a819-02dcebd0762c" containerName="extract-content" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.078260 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a366ce2-40d3-40b0-a819-02dcebd0762c" containerName="extract-content" Jan 27 19:08:06 crc kubenswrapper[5049]: E0127 19:08:06.078283 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d406e32-b6ab-4d2e-ab26-132c549a6dc8" containerName="extract-content" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.078290 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d406e32-b6ab-4d2e-ab26-132c549a6dc8" containerName="extract-content" Jan 27 19:08:06 crc kubenswrapper[5049]: E0127 19:08:06.078303 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a366ce2-40d3-40b0-a819-02dcebd0762c" containerName="registry-server" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.078311 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a366ce2-40d3-40b0-a819-02dcebd0762c" containerName="registry-server" Jan 27 19:08:06 crc kubenswrapper[5049]: E0127 19:08:06.078328 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2778ca8a-f777-4571-9f19-e7c7992611d8" containerName="registry-server" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.078335 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2778ca8a-f777-4571-9f19-e7c7992611d8" containerName="registry-server" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.078555 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="2778ca8a-f777-4571-9f19-e7c7992611d8" containerName="registry-server" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.078578 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d406e32-b6ab-4d2e-ab26-132c549a6dc8" containerName="registry-server" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.078602 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a366ce2-40d3-40b0-a819-02dcebd0762c" containerName="registry-server" Jan 27 19:08:06 crc 
kubenswrapper[5049]: I0127 19:08:06.080624 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6hlnx" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.089530 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6hlnx"] Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.153153 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e76f122-5771-446e-86a2-da8e554af581-utilities\") pod \"redhat-marketplace-6hlnx\" (UID: \"4e76f122-5771-446e-86a2-da8e554af581\") " pod="openshift-marketplace/redhat-marketplace-6hlnx" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.153268 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds4mx\" (UniqueName: \"kubernetes.io/projected/4e76f122-5771-446e-86a2-da8e554af581-kube-api-access-ds4mx\") pod \"redhat-marketplace-6hlnx\" (UID: \"4e76f122-5771-446e-86a2-da8e554af581\") " pod="openshift-marketplace/redhat-marketplace-6hlnx" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.153330 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e76f122-5771-446e-86a2-da8e554af581-catalog-content\") pod \"redhat-marketplace-6hlnx\" (UID: \"4e76f122-5771-446e-86a2-da8e554af581\") " pod="openshift-marketplace/redhat-marketplace-6hlnx" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.255918 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e76f122-5771-446e-86a2-da8e554af581-utilities\") pod \"redhat-marketplace-6hlnx\" (UID: \"4e76f122-5771-446e-86a2-da8e554af581\") " pod="openshift-marketplace/redhat-marketplace-6hlnx" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.255972 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds4mx\" (UniqueName: \"kubernetes.io/projected/4e76f122-5771-446e-86a2-da8e554af581-kube-api-access-ds4mx\") pod \"redhat-marketplace-6hlnx\" (UID: \"4e76f122-5771-446e-86a2-da8e554af581\") " pod="openshift-marketplace/redhat-marketplace-6hlnx" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.256013 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e76f122-5771-446e-86a2-da8e554af581-catalog-content\") pod \"redhat-marketplace-6hlnx\" (UID: \"4e76f122-5771-446e-86a2-da8e554af581\") " pod="openshift-marketplace/redhat-marketplace-6hlnx" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.256448 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e76f122-5771-446e-86a2-da8e554af581-utilities\") pod \"redhat-marketplace-6hlnx\" (UID: \"4e76f122-5771-446e-86a2-da8e554af581\") " pod="openshift-marketplace/redhat-marketplace-6hlnx" Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.256500 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e76f122-5771-446e-86a2-da8e554af581-catalog-content\") pod \"redhat-marketplace-6hlnx\" (UID: \"4e76f122-5771-446e-86a2-da8e554af581\") " pod="openshift-marketplace/redhat-marketplace-6hlnx" Jan 27 19:08:06 crc 
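[Editor's note] The reconciler_common.go entries above show the kubelet's volume manager converging actual state toward desired state: VerifyControllerAttachedVolume, then MountVolume, then "MountVolume.SetUp succeeded" for each of the pod's three volumes. A toy Python sketch of that reconcile loop, under stated assumptions (names here are invented for illustration; the real reconciler is Go code inside the kubelet volume manager):

    def reconcile(desired, mounted):
        """Mount whatever is desired but not yet mounted; return actions taken."""
        actions = []
        for vol in sorted(desired - mounted):
            actions.append(f"MountVolume.SetUp succeeded for {vol}")
            mounted.add(vol)
        return actions

    desired = {"utilities", "catalog-content", "kube-api-access-ds4mx"}
    mounted = set()
    for action in reconcile(desired, mounted):
        print(action)
    print(reconcile(desired, mounted))   # [] -- steady state, nothing left to do

The loop is level-triggered: it compares sets on every pass rather than reacting to individual events, which is why re-running it after convergence is a harmless no-op.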
Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.278465 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds4mx\" (UniqueName: \"kubernetes.io/projected/4e76f122-5771-446e-86a2-da8e554af581-kube-api-access-ds4mx\") pod \"redhat-marketplace-6hlnx\" (UID: \"4e76f122-5771-446e-86a2-da8e554af581\") " pod="openshift-marketplace/redhat-marketplace-6hlnx"
Jan 27 19:08:06 crc kubenswrapper[5049]: I0127 19:08:06.402908 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6hlnx"
Jan 27 19:08:07 crc kubenswrapper[5049]: I0127 19:08:06.923057 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6hlnx"]
Jan 27 19:08:07 crc kubenswrapper[5049]: W0127 19:08:06.941826 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e76f122_5771_446e_86a2_da8e554af581.slice/crio-5258e5eed7e9489a55229b47b832f343881145d476141108275a3b3a7a123077 WatchSource:0}: Error finding container 5258e5eed7e9489a55229b47b832f343881145d476141108275a3b3a7a123077: Status 404 returned error can't find the container with id 5258e5eed7e9489a55229b47b832f343881145d476141108275a3b3a7a123077
Jan 27 19:08:07 crc kubenswrapper[5049]: I0127 19:08:07.250403 5049 generic.go:334] "Generic (PLEG): container finished" podID="4e76f122-5771-446e-86a2-da8e554af581" containerID="0e83413e5d4e02d4eefbec15ad320ec379592a35c5f63889d97024d6645f4a00" exitCode=0
Jan 27 19:08:07 crc kubenswrapper[5049]: I0127 19:08:07.250504 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hlnx" event={"ID":"4e76f122-5771-446e-86a2-da8e554af581","Type":"ContainerDied","Data":"0e83413e5d4e02d4eefbec15ad320ec379592a35c5f63889d97024d6645f4a00"}
Jan 27 19:08:07 crc kubenswrapper[5049]: I0127 19:08:07.250705 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hlnx" event={"ID":"4e76f122-5771-446e-86a2-da8e554af581","Type":"ContainerStarted","Data":"5258e5eed7e9489a55229b47b832f343881145d476141108275a3b3a7a123077"}
Jan 27 19:08:08 crc kubenswrapper[5049]: I0127 19:08:08.259832 5049 generic.go:334] "Generic (PLEG): container finished" podID="4e76f122-5771-446e-86a2-da8e554af581" containerID="49b0da545aef816027a3d425ccf5a1ddb8e91ccc75390b88469501a9a30376c4" exitCode=0
Jan 27 19:08:08 crc kubenswrapper[5049]: I0127 19:08:08.259899 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hlnx" event={"ID":"4e76f122-5771-446e-86a2-da8e554af581","Type":"ContainerDied","Data":"49b0da545aef816027a3d425ccf5a1ddb8e91ccc75390b88469501a9a30376c4"}
Jan 27 19:08:09 crc kubenswrapper[5049]: I0127 19:08:09.270148 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hlnx" event={"ID":"4e76f122-5771-446e-86a2-da8e554af581","Type":"ContainerStarted","Data":"0e6797da542b23059b29360bfec85f4125e66a7e40c0d5f71406807ab8533959"}
Jan 27 19:08:09 crc kubenswrapper[5049]: I0127 19:08:09.290503 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6hlnx" podStartSLOduration=1.906204465 podStartE2EDuration="3.2904861s" podCreationTimestamp="2026-01-27 19:08:06 +0000 UTC" firstStartedPulling="2026-01-27 19:08:07.254703375 +0000 UTC m=+7862.353676924" lastFinishedPulling="2026-01-27 19:08:08.63898501 +0000 UTC m=+7863.737958559" observedRunningTime="2026-01-27 19:08:09.289052609 +0000 UTC m=+7864.388026178" watchObservedRunningTime="2026-01-27 19:08:09.2904861 +0000 UTC m=+7864.389459649"
Jan 27 19:08:16 crc kubenswrapper[5049]: I0127 19:08:16.403528 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6hlnx"
Jan 27 19:08:16 crc kubenswrapper[5049]: I0127 19:08:16.404193 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6hlnx"
Jan 27 19:08:16 crc kubenswrapper[5049]: I0127 19:08:16.447622 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6hlnx"
Jan 27 19:08:17 crc kubenswrapper[5049]: I0127 19:08:17.395458 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6hlnx"
Jan 27 19:08:18 crc kubenswrapper[5049]: I0127 19:08:18.031347 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6hlnx"]
Jan 27 19:08:19 crc kubenswrapper[5049]: I0127 19:08:19.361999 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6hlnx" podUID="4e76f122-5771-446e-86a2-da8e554af581" containerName="registry-server" containerID="cri-o://0e6797da542b23059b29360bfec85f4125e66a7e40c0d5f71406807ab8533959" gracePeriod=2
Jan 27 19:08:19 crc kubenswrapper[5049]: I0127 19:08:19.842957 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6hlnx"
Jan 27 19:08:19 crc kubenswrapper[5049]: I0127 19:08:19.941151 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e76f122-5771-446e-86a2-da8e554af581-utilities\") pod \"4e76f122-5771-446e-86a2-da8e554af581\" (UID: \"4e76f122-5771-446e-86a2-da8e554af581\") "
Jan 27 19:08:19 crc kubenswrapper[5049]: I0127 19:08:19.941195 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e76f122-5771-446e-86a2-da8e554af581-catalog-content\") pod \"4e76f122-5771-446e-86a2-da8e554af581\" (UID: \"4e76f122-5771-446e-86a2-da8e554af581\") "
Jan 27 19:08:19 crc kubenswrapper[5049]: I0127 19:08:19.941270 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds4mx\" (UniqueName: \"kubernetes.io/projected/4e76f122-5771-446e-86a2-da8e554af581-kube-api-access-ds4mx\") pod \"4e76f122-5771-446e-86a2-da8e554af581\" (UID: \"4e76f122-5771-446e-86a2-da8e554af581\") "
Jan 27 19:08:19 crc kubenswrapper[5049]: I0127 19:08:19.942288 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e76f122-5771-446e-86a2-da8e554af581-utilities" (OuterVolumeSpecName: "utilities") pod "4e76f122-5771-446e-86a2-da8e554af581" (UID: "4e76f122-5771-446e-86a2-da8e554af581"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:08:19 crc kubenswrapper[5049]: I0127 19:08:19.947936 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e76f122-5771-446e-86a2-da8e554af581-kube-api-access-ds4mx" (OuterVolumeSpecName: "kube-api-access-ds4mx") pod "4e76f122-5771-446e-86a2-da8e554af581" (UID: "4e76f122-5771-446e-86a2-da8e554af581"). InnerVolumeSpecName "kube-api-access-ds4mx". PluginName "kubernetes.io/projected", VolumeGidValue ""
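[Editor's note] The "Observed pod startup duration" entry above is worth decoding: the m=+NNN suffixes are seconds on the kubelet's monotonic clock, so the durations can be checked directly. Verifying the arithmetic in Python (constants copied from the log line):

    first_started_pulling = 7862.353676924   # m=+... on firstStartedPulling
    last_finished_pulling = 7863.737958559   # m=+... on lastFinishedPulling

    pull = last_finished_pulling - first_started_pulling
    print(f"image pull took   {pull:.9f}s")            # ~1.384281635
    print(f"E2E minus pull is {3.2904861 - pull:.9f}s")  # 1.906204465, as logged

So podStartSLOduration (1.906204465) equals podStartE2EDuration (3.2904861s) minus the image-pull time, i.e. the SLO metric excludes time spent pulling images, which matches how the numbers in this entry line up.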
InnerVolumeSpecName "kube-api-access-ds4mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:08:19 crc kubenswrapper[5049]: I0127 19:08:19.965393 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e76f122-5771-446e-86a2-da8e554af581-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e76f122-5771-446e-86a2-da8e554af581" (UID: "4e76f122-5771-446e-86a2-da8e554af581"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.043272 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e76f122-5771-446e-86a2-da8e554af581-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.043517 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e76f122-5771-446e-86a2-da8e554af581-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.043593 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds4mx\" (UniqueName: \"kubernetes.io/projected/4e76f122-5771-446e-86a2-da8e554af581-kube-api-access-ds4mx\") on node \"crc\" DevicePath \"\"" Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.372830 5049 generic.go:334] "Generic (PLEG): container finished" podID="4e76f122-5771-446e-86a2-da8e554af581" containerID="0e6797da542b23059b29360bfec85f4125e66a7e40c0d5f71406807ab8533959" exitCode=0 Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.372876 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hlnx" event={"ID":"4e76f122-5771-446e-86a2-da8e554af581","Type":"ContainerDied","Data":"0e6797da542b23059b29360bfec85f4125e66a7e40c0d5f71406807ab8533959"} Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.372905 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6hlnx" Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.372938 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6hlnx" event={"ID":"4e76f122-5771-446e-86a2-da8e554af581","Type":"ContainerDied","Data":"5258e5eed7e9489a55229b47b832f343881145d476141108275a3b3a7a123077"} Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.372963 5049 scope.go:117] "RemoveContainer" containerID="0e6797da542b23059b29360bfec85f4125e66a7e40c0d5f71406807ab8533959" Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.405834 5049 scope.go:117] "RemoveContainer" containerID="49b0da545aef816027a3d425ccf5a1ddb8e91ccc75390b88469501a9a30376c4" Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.410651 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6hlnx"] Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.419348 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6hlnx"] Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.429443 5049 scope.go:117] "RemoveContainer" containerID="0e83413e5d4e02d4eefbec15ad320ec379592a35c5f63889d97024d6645f4a00" Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.513646 5049 scope.go:117] "RemoveContainer" containerID="0e6797da542b23059b29360bfec85f4125e66a7e40c0d5f71406807ab8533959" Jan 27 19:08:20 crc kubenswrapper[5049]: E0127 19:08:20.514636 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e6797da542b23059b29360bfec85f4125e66a7e40c0d5f71406807ab8533959\": container with ID starting with 0e6797da542b23059b29360bfec85f4125e66a7e40c0d5f71406807ab8533959 not found: ID does not exist" containerID="0e6797da542b23059b29360bfec85f4125e66a7e40c0d5f71406807ab8533959" Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.514871 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e6797da542b23059b29360bfec85f4125e66a7e40c0d5f71406807ab8533959"} err="failed to get container status \"0e6797da542b23059b29360bfec85f4125e66a7e40c0d5f71406807ab8533959\": rpc error: code = NotFound desc = could not find container \"0e6797da542b23059b29360bfec85f4125e66a7e40c0d5f71406807ab8533959\": container with ID starting with 0e6797da542b23059b29360bfec85f4125e66a7e40c0d5f71406807ab8533959 not found: ID does not exist" Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.515229 5049 scope.go:117] "RemoveContainer" containerID="49b0da545aef816027a3d425ccf5a1ddb8e91ccc75390b88469501a9a30376c4" Jan 27 19:08:20 crc kubenswrapper[5049]: E0127 19:08:20.516149 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49b0da545aef816027a3d425ccf5a1ddb8e91ccc75390b88469501a9a30376c4\": container with ID starting with 49b0da545aef816027a3d425ccf5a1ddb8e91ccc75390b88469501a9a30376c4 not found: ID does not exist" containerID="49b0da545aef816027a3d425ccf5a1ddb8e91ccc75390b88469501a9a30376c4" Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.516184 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49b0da545aef816027a3d425ccf5a1ddb8e91ccc75390b88469501a9a30376c4"} err="failed to get container status \"49b0da545aef816027a3d425ccf5a1ddb8e91ccc75390b88469501a9a30376c4\": rpc error: code = NotFound desc = could not find 
container \"49b0da545aef816027a3d425ccf5a1ddb8e91ccc75390b88469501a9a30376c4\": container with ID starting with 49b0da545aef816027a3d425ccf5a1ddb8e91ccc75390b88469501a9a30376c4 not found: ID does not exist" Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.516205 5049 scope.go:117] "RemoveContainer" containerID="0e83413e5d4e02d4eefbec15ad320ec379592a35c5f63889d97024d6645f4a00" Jan 27 19:08:20 crc kubenswrapper[5049]: E0127 19:08:20.517031 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e83413e5d4e02d4eefbec15ad320ec379592a35c5f63889d97024d6645f4a00\": container with ID starting with 0e83413e5d4e02d4eefbec15ad320ec379592a35c5f63889d97024d6645f4a00 not found: ID does not exist" containerID="0e83413e5d4e02d4eefbec15ad320ec379592a35c5f63889d97024d6645f4a00" Jan 27 19:08:20 crc kubenswrapper[5049]: I0127 19:08:20.517063 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e83413e5d4e02d4eefbec15ad320ec379592a35c5f63889d97024d6645f4a00"} err="failed to get container status \"0e83413e5d4e02d4eefbec15ad320ec379592a35c5f63889d97024d6645f4a00\": rpc error: code = NotFound desc = could not find container \"0e83413e5d4e02d4eefbec15ad320ec379592a35c5f63889d97024d6645f4a00\": container with ID starting with 0e83413e5d4e02d4eefbec15ad320ec379592a35c5f63889d97024d6645f4a00 not found: ID does not exist" Jan 27 19:08:21 crc kubenswrapper[5049]: I0127 19:08:21.665663 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e76f122-5771-446e-86a2-da8e554af581" path="/var/lib/kubelet/pods/4e76f122-5771-446e-86a2-da8e554af581/volumes" Jan 27 19:08:47 crc kubenswrapper[5049]: I0127 19:08:47.781029 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:08:47 crc kubenswrapper[5049]: I0127 19:08:47.781619 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:09:17 crc kubenswrapper[5049]: I0127 19:09:17.781173 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:09:17 crc kubenswrapper[5049]: I0127 19:09:17.782210 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:09:47 crc kubenswrapper[5049]: I0127 19:09:47.781731 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 
Jan 27 19:09:47 crc kubenswrapper[5049]: I0127 19:09:47.782528 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:09:47 crc kubenswrapper[5049]: I0127 19:09:47.782604 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9"
Jan 27 19:09:47 crc kubenswrapper[5049]: I0127 19:09:47.784005 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fa623068b6bfd43b77a27803fad01500d85c86f42f3e20f99c90e696869ae1ac"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 19:09:47 crc kubenswrapper[5049]: I0127 19:09:47.784113 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://fa623068b6bfd43b77a27803fad01500d85c86f42f3e20f99c90e696869ae1ac" gracePeriod=600
Jan 27 19:09:48 crc kubenswrapper[5049]: I0127 19:09:48.128345 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="fa623068b6bfd43b77a27803fad01500d85c86f42f3e20f99c90e696869ae1ac" exitCode=0
Jan 27 19:09:48 crc kubenswrapper[5049]: I0127 19:09:48.128413 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"fa623068b6bfd43b77a27803fad01500d85c86f42f3e20f99c90e696869ae1ac"}
Jan 27 19:09:48 crc kubenswrapper[5049]: I0127 19:09:48.128748 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"}
Jan 27 19:09:48 crc kubenswrapper[5049]: I0127 19:09:48.128768 5049 scope.go:117] "RemoveContainer" containerID="0cddc6f753c66e2bb0a98412576c5e50ded6c9708727f623c75846d20e3ae26a"
Jan 27 19:12:17 crc kubenswrapper[5049]: I0127 19:12:17.781549 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:12:17 crc kubenswrapper[5049]: I0127 19:12:17.782366 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:12:47 crc kubenswrapper[5049]: I0127 19:12:47.782054 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:12:47 crc kubenswrapper[5049]: I0127 19:12:47.782654 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:13:17 crc kubenswrapper[5049]: I0127 19:13:17.781776 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 19:13:17 crc kubenswrapper[5049]: I0127 19:13:17.782364 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 19:13:17 crc kubenswrapper[5049]: I0127 19:13:17.782422 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9"
Jan 27 19:13:17 crc kubenswrapper[5049]: I0127 19:13:17.783207 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 19:13:17 crc kubenswrapper[5049]: I0127 19:13:17.783266 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" gracePeriod=600
Jan 27 19:13:17 crc kubenswrapper[5049]: E0127 19:13:17.966215 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:13:18 crc kubenswrapper[5049]: I0127 19:13:18.721897 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" exitCode=0
Jan 27 19:13:18 crc kubenswrapper[5049]: I0127 19:13:18.721965 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"}
Jan 27 19:13:18 crc kubenswrapper[5049]: I0127 19:13:18.722398 5049 scope.go:117] "RemoveContainer" containerID="fa623068b6bfd43b77a27803fad01500d85c86f42f3e20f99c90e696869ae1ac"
Jan 27 19:13:18 crc kubenswrapper[5049]: I0127 19:13:18.723105 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"
Jan 27 19:13:18 crc kubenswrapper[5049]: E0127 19:13:18.723382 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:13:33 crc kubenswrapper[5049]: I0127 19:13:33.646086 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"
Jan 27 19:13:33 crc kubenswrapper[5049]: E0127 19:13:33.646844 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:13:44 crc kubenswrapper[5049]: I0127 19:13:44.646432 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"
Jan 27 19:13:44 crc kubenswrapper[5049]: E0127 19:13:44.648337 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:13:57 crc kubenswrapper[5049]: I0127 19:13:57.645974 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"
Jan 27 19:13:57 crc kubenswrapper[5049]: E0127 19:13:57.646950 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:14:11 crc kubenswrapper[5049]: I0127 19:14:11.646568 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"
Jan 27 19:14:11 crc kubenswrapper[5049]: E0127 19:14:11.647486 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:14:25 crc kubenswrapper[5049]: I0127 19:14:25.651718 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"
Jan 27 19:14:25 crc kubenswrapper[5049]: E0127 19:14:25.652436 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:14:39 crc kubenswrapper[5049]: I0127 19:14:39.646748 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"
Jan 27 19:14:39 crc kubenswrapper[5049]: E0127 19:14:39.647759 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:14:53 crc kubenswrapper[5049]: I0127 19:14:53.646767 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"
Jan 27 19:14:53 crc kubenswrapper[5049]: E0127 19:14:53.647473 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.163659 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"]
Jan 27 19:15:00 crc kubenswrapper[5049]: E0127 19:15:00.164573 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e76f122-5771-446e-86a2-da8e554af581" containerName="registry-server"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.164585 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e76f122-5771-446e-86a2-da8e554af581" containerName="registry-server"
Jan 27 19:15:00 crc kubenswrapper[5049]: E0127 19:15:00.164603 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e76f122-5771-446e-86a2-da8e554af581" containerName="extract-content"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.164609 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e76f122-5771-446e-86a2-da8e554af581" containerName="extract-content"
Jan 27 19:15:00 crc kubenswrapper[5049]: E0127 19:15:00.164616 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e76f122-5771-446e-86a2-da8e554af581" containerName="extract-utilities"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.164623 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e76f122-5771-446e-86a2-da8e554af581" containerName="extract-utilities"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.164830 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e76f122-5771-446e-86a2-da8e554af581" containerName="registry-server"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.165485 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"
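[Editor's note] The alternating "RemoveContainer" / "CrashLoopBackOff: back-off 5m0s" pairs above (19:13:18, 19:13:33, 19:13:44, 19:13:57, 19:14:11, 19:14:25, ...) are the kubelet re-evaluating the pod while the restart backoff holds. The kubelet's container restart backoff doubles up to a 5-minute cap; the 10s base and 2x factor below are the defaults I believe apply, so treat them as assumptions rather than values read from this log:

    def crashloop_delay(restart_count, base=10.0, cap=300.0):
        """Delay before the next restart: base * 2^n, capped at 5m0s."""
        return min(base * (2 ** restart_count), cap)

    for n in range(7):
        print(n, f"{crashloop_delay(n):.0f}s")   # 10, 20, 40, 80, 160, 300, 300

Once the cap is reached, every sync attempt inside the window fails fast with the "back-off 5m0s restarting failed container" error seen repeatedly above.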
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.170366 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"]
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.170768 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.171074 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.202004 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fa286911-0042-48ed-bf82-4173e2b86c91-secret-volume\") pod \"collect-profiles-29492355-xwqwb\" (UID: \"fa286911-0042-48ed-bf82-4173e2b86c91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.202098 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa286911-0042-48ed-bf82-4173e2b86c91-config-volume\") pod \"collect-profiles-29492355-xwqwb\" (UID: \"fa286911-0042-48ed-bf82-4173e2b86c91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.202138 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qvwg\" (UniqueName: \"kubernetes.io/projected/fa286911-0042-48ed-bf82-4173e2b86c91-kube-api-access-2qvwg\") pod \"collect-profiles-29492355-xwqwb\" (UID: \"fa286911-0042-48ed-bf82-4173e2b86c91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.303505 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa286911-0042-48ed-bf82-4173e2b86c91-config-volume\") pod \"collect-profiles-29492355-xwqwb\" (UID: \"fa286911-0042-48ed-bf82-4173e2b86c91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.303563 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qvwg\" (UniqueName: \"kubernetes.io/projected/fa286911-0042-48ed-bf82-4173e2b86c91-kube-api-access-2qvwg\") pod \"collect-profiles-29492355-xwqwb\" (UID: \"fa286911-0042-48ed-bf82-4173e2b86c91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.304011 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fa286911-0042-48ed-bf82-4173e2b86c91-secret-volume\") pod \"collect-profiles-29492355-xwqwb\" (UID: \"fa286911-0042-48ed-bf82-4173e2b86c91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.304830 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa286911-0042-48ed-bf82-4173e2b86c91-config-volume\") pod \"collect-profiles-29492355-xwqwb\" (UID: \"fa286911-0042-48ed-bf82-4173e2b86c91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.310747 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fa286911-0042-48ed-bf82-4173e2b86c91-secret-volume\") pod \"collect-profiles-29492355-xwqwb\" (UID: \"fa286911-0042-48ed-bf82-4173e2b86c91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.319048 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qvwg\" (UniqueName: \"kubernetes.io/projected/fa286911-0042-48ed-bf82-4173e2b86c91-kube-api-access-2qvwg\") pod \"collect-profiles-29492355-xwqwb\" (UID: \"fa286911-0042-48ed-bf82-4173e2b86c91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.519391 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"
Jan 27 19:15:00 crc kubenswrapper[5049]: I0127 19:15:00.994364 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"]
Jan 27 19:15:00 crc kubenswrapper[5049]: W0127 19:15:00.998970 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa286911_0042_48ed_bf82_4173e2b86c91.slice/crio-744e8fe9aa909d1b0bd5bd973f384cf7fbe47a414a748d1d7028114b51018432 WatchSource:0}: Error finding container 744e8fe9aa909d1b0bd5bd973f384cf7fbe47a414a748d1d7028114b51018432: Status 404 returned error can't find the container with id 744e8fe9aa909d1b0bd5bd973f384cf7fbe47a414a748d1d7028114b51018432
Jan 27 19:15:01 crc kubenswrapper[5049]: I0127 19:15:01.569103 5049 generic.go:334] "Generic (PLEG): container finished" podID="fa286911-0042-48ed-bf82-4173e2b86c91" containerID="fd6dad9fb8b0fd1f05c9197105abd0bcf25d240915f2f1456c058c00c1cc5d04" exitCode=0
Jan 27 19:15:01 crc kubenswrapper[5049]: I0127 19:15:01.569239 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb" event={"ID":"fa286911-0042-48ed-bf82-4173e2b86c91","Type":"ContainerDied","Data":"fd6dad9fb8b0fd1f05c9197105abd0bcf25d240915f2f1456c058c00c1cc5d04"}
Jan 27 19:15:01 crc kubenswrapper[5049]: I0127 19:15:01.569427 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb" event={"ID":"fa286911-0042-48ed-bf82-4173e2b86c91","Type":"ContainerStarted","Data":"744e8fe9aa909d1b0bd5bd973f384cf7fbe47a414a748d1d7028114b51018432"}
Jan 27 19:15:02 crc kubenswrapper[5049]: I0127 19:15:02.943726 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"
Jan 27 19:15:03 crc kubenswrapper[5049]: I0127 19:15:03.054561 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fa286911-0042-48ed-bf82-4173e2b86c91-secret-volume\") pod \"fa286911-0042-48ed-bf82-4173e2b86c91\" (UID: \"fa286911-0042-48ed-bf82-4173e2b86c91\") "
Jan 27 19:15:03 crc kubenswrapper[5049]: I0127 19:15:03.054719 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa286911-0042-48ed-bf82-4173e2b86c91-config-volume\") pod \"fa286911-0042-48ed-bf82-4173e2b86c91\" (UID: \"fa286911-0042-48ed-bf82-4173e2b86c91\") "
Jan 27 19:15:03 crc kubenswrapper[5049]: I0127 19:15:03.054778 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qvwg\" (UniqueName: \"kubernetes.io/projected/fa286911-0042-48ed-bf82-4173e2b86c91-kube-api-access-2qvwg\") pod \"fa286911-0042-48ed-bf82-4173e2b86c91\" (UID: \"fa286911-0042-48ed-bf82-4173e2b86c91\") "
Jan 27 19:15:03 crc kubenswrapper[5049]: I0127 19:15:03.055826 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa286911-0042-48ed-bf82-4173e2b86c91-config-volume" (OuterVolumeSpecName: "config-volume") pod "fa286911-0042-48ed-bf82-4173e2b86c91" (UID: "fa286911-0042-48ed-bf82-4173e2b86c91"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 19:15:03 crc kubenswrapper[5049]: I0127 19:15:03.061269 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa286911-0042-48ed-bf82-4173e2b86c91-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fa286911-0042-48ed-bf82-4173e2b86c91" (UID: "fa286911-0042-48ed-bf82-4173e2b86c91"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 19:15:03 crc kubenswrapper[5049]: I0127 19:15:03.063907 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa286911-0042-48ed-bf82-4173e2b86c91-kube-api-access-2qvwg" (OuterVolumeSpecName: "kube-api-access-2qvwg") pod "fa286911-0042-48ed-bf82-4173e2b86c91" (UID: "fa286911-0042-48ed-bf82-4173e2b86c91"). InnerVolumeSpecName "kube-api-access-2qvwg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:15:03 crc kubenswrapper[5049]: I0127 19:15:03.156873 5049 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fa286911-0042-48ed-bf82-4173e2b86c91-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 27 19:15:03 crc kubenswrapper[5049]: I0127 19:15:03.156904 5049 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa286911-0042-48ed-bf82-4173e2b86c91-config-volume\") on node \"crc\" DevicePath \"\""
Jan 27 19:15:03 crc kubenswrapper[5049]: I0127 19:15:03.156920 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qvwg\" (UniqueName: \"kubernetes.io/projected/fa286911-0042-48ed-bf82-4173e2b86c91-kube-api-access-2qvwg\") on node \"crc\" DevicePath \"\""
Jan 27 19:15:03 crc kubenswrapper[5049]: I0127 19:15:03.586918 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb" event={"ID":"fa286911-0042-48ed-bf82-4173e2b86c91","Type":"ContainerDied","Data":"744e8fe9aa909d1b0bd5bd973f384cf7fbe47a414a748d1d7028114b51018432"}
Jan 27 19:15:03 crc kubenswrapper[5049]: I0127 19:15:03.586964 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="744e8fe9aa909d1b0bd5bd973f384cf7fbe47a414a748d1d7028114b51018432"
Jan 27 19:15:03 crc kubenswrapper[5049]: I0127 19:15:03.586975 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492355-xwqwb"
Jan 27 19:15:04 crc kubenswrapper[5049]: I0127 19:15:04.018547 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk"]
Jan 27 19:15:04 crc kubenswrapper[5049]: I0127 19:15:04.025898 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492310-287vk"]
Jan 27 19:15:05 crc kubenswrapper[5049]: I0127 19:15:05.660432 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e100adee-5dff-47a6-92cf-63eee7ba45b5" path="/var/lib/kubelet/pods/e100adee-5dff-47a6-92cf-63eee7ba45b5/volumes"
Jan 27 19:15:07 crc kubenswrapper[5049]: I0127 19:15:07.645723 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"
Jan 27 19:15:07 crc kubenswrapper[5049]: E0127 19:15:07.646627 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:15:20 crc kubenswrapper[5049]: I0127 19:15:20.603353 5049 scope.go:117] "RemoveContainer" containerID="d6accd3cc8c4d9e8d878f385641015d13a6a7d327de19d89cdd070c4c6688b00"
Jan 27 19:15:22 crc kubenswrapper[5049]: I0127 19:15:22.646305 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"
Jan 27 19:15:22 crc kubenswrapper[5049]: E0127 19:15:22.647109 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:15:33 crc kubenswrapper[5049]: I0127 19:15:33.646169 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" Jan 27 19:15:33 crc kubenswrapper[5049]: E0127 19:15:33.647046 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:15:45 crc kubenswrapper[5049]: I0127 19:15:45.651489 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" Jan 27 19:15:45 crc kubenswrapper[5049]: E0127 19:15:45.652414 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:15:58 crc kubenswrapper[5049]: I0127 19:15:58.646275 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" Jan 27 19:15:58 crc kubenswrapper[5049]: E0127 19:15:58.647366 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:16:11 crc kubenswrapper[5049]: I0127 19:16:11.646479 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" Jan 27 19:16:11 crc kubenswrapper[5049]: E0127 19:16:11.647463 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:16:24 crc kubenswrapper[5049]: I0127 19:16:24.646072 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" Jan 27 19:16:24 crc kubenswrapper[5049]: E0127 19:16:24.647059 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:16:35 crc kubenswrapper[5049]: I0127 19:16:35.653343 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" Jan 27 19:16:35 crc kubenswrapper[5049]: E0127 19:16:35.654113 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:16:47 crc kubenswrapper[5049]: I0127 19:16:47.647010 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" Jan 27 19:16:47 crc kubenswrapper[5049]: E0127 19:16:47.647910 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:17:00 crc kubenswrapper[5049]: I0127 19:17:00.646461 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" Jan 27 19:17:00 crc kubenswrapper[5049]: E0127 19:17:00.647318 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:17:14 crc kubenswrapper[5049]: I0127 19:17:14.646585 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" Jan 27 19:17:14 crc kubenswrapper[5049]: E0127 19:17:14.648492 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:17:28 crc kubenswrapper[5049]: I0127 19:17:28.646575 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" Jan 27 19:17:28 crc kubenswrapper[5049]: E0127 19:17:28.647434 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:17:42 crc kubenswrapper[5049]: I0127 19:17:42.646430 5049 
scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" Jan 27 19:17:42 crc kubenswrapper[5049]: E0127 19:17:42.647201 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.598833 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2khct"] Jan 27 19:17:50 crc kubenswrapper[5049]: E0127 19:17:50.599913 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa286911-0042-48ed-bf82-4173e2b86c91" containerName="collect-profiles" Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.599930 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa286911-0042-48ed-bf82-4173e2b86c91" containerName="collect-profiles" Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.600135 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa286911-0042-48ed-bf82-4173e2b86c91" containerName="collect-profiles" Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.601861 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.621695 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2khct"] Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.684709 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6blv8\" (UniqueName: \"kubernetes.io/projected/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-kube-api-access-6blv8\") pod \"certified-operators-2khct\" (UID: \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\") " pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.684778 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-utilities\") pod \"certified-operators-2khct\" (UID: \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\") " pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.684837 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-catalog-content\") pod \"certified-operators-2khct\" (UID: \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\") " pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.787068 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6blv8\" (UniqueName: \"kubernetes.io/projected/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-kube-api-access-6blv8\") pod \"certified-operators-2khct\" (UID: \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\") " pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.787142 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-utilities\") pod \"certified-operators-2khct\" (UID: \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\") " pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.787185 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-catalog-content\") pod \"certified-operators-2khct\" (UID: \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\") " pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.787733 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-utilities\") pod \"certified-operators-2khct\" (UID: \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\") " pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.787764 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-catalog-content\") pod \"certified-operators-2khct\" (UID: \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\") " pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.808597 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6blv8\" (UniqueName: \"kubernetes.io/projected/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-kube-api-access-6blv8\") pod \"certified-operators-2khct\" (UID: \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\") " pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:17:50 crc kubenswrapper[5049]: I0127 19:17:50.920466 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:17:51 crc kubenswrapper[5049]: I0127 19:17:51.507803 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2khct"] Jan 27 19:17:52 crc kubenswrapper[5049]: I0127 19:17:52.212352 5049 generic.go:334] "Generic (PLEG): container finished" podID="2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" containerID="6d603f2f32cb852733d1a712cbbafbe9f83d65bb314ce24969d8d0b75e42057c" exitCode=0 Jan 27 19:17:52 crc kubenswrapper[5049]: I0127 19:17:52.212700 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2khct" event={"ID":"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970","Type":"ContainerDied","Data":"6d603f2f32cb852733d1a712cbbafbe9f83d65bb314ce24969d8d0b75e42057c"} Jan 27 19:17:52 crc kubenswrapper[5049]: I0127 19:17:52.212730 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2khct" event={"ID":"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970","Type":"ContainerStarted","Data":"617aa981a68317b320224894c880f88dc73e8b94d56b2be6a4403c7529ccfc1b"} Jan 27 19:17:52 crc kubenswrapper[5049]: I0127 19:17:52.216452 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:17:53 crc kubenswrapper[5049]: I0127 19:17:53.226389 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2khct" event={"ID":"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970","Type":"ContainerStarted","Data":"32739f02ac5c17313cbdf2e8984954905f6672bd2899fa36e08f560fe32a01e5"} Jan 27 19:17:53 crc kubenswrapper[5049]: I0127 19:17:53.646471 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" Jan 27 19:17:53 crc kubenswrapper[5049]: E0127 19:17:53.646780 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:17:54 crc kubenswrapper[5049]: I0127 19:17:54.250055 5049 generic.go:334] "Generic (PLEG): container finished" podID="2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" containerID="32739f02ac5c17313cbdf2e8984954905f6672bd2899fa36e08f560fe32a01e5" exitCode=0 Jan 27 19:17:54 crc kubenswrapper[5049]: I0127 19:17:54.250095 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2khct" event={"ID":"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970","Type":"ContainerDied","Data":"32739f02ac5c17313cbdf2e8984954905f6672bd2899fa36e08f560fe32a01e5"} Jan 27 19:17:55 crc kubenswrapper[5049]: I0127 19:17:55.261384 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2khct" event={"ID":"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970","Type":"ContainerStarted","Data":"5cfb71a8b668f4ef304ceb97f863c36b8d59871d1cf5977ac50bdfaca983e462"} Jan 27 19:17:55 crc kubenswrapper[5049]: I0127 19:17:55.292081 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2khct" podStartSLOduration=2.470726061 podStartE2EDuration="5.292064002s" podCreationTimestamp="2026-01-27 19:17:50 +0000 UTC" 
firstStartedPulling="2026-01-27 19:17:52.21446816 +0000 UTC m=+8447.313441719" lastFinishedPulling="2026-01-27 19:17:55.035806111 +0000 UTC m=+8450.134779660" observedRunningTime="2026-01-27 19:17:55.285078953 +0000 UTC m=+8450.384052522" watchObservedRunningTime="2026-01-27 19:17:55.292064002 +0000 UTC m=+8450.391037551" Jan 27 19:18:00 crc kubenswrapper[5049]: I0127 19:18:00.922248 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:18:00 crc kubenswrapper[5049]: I0127 19:18:00.922869 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:18:00 crc kubenswrapper[5049]: I0127 19:18:00.970419 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:18:01 crc kubenswrapper[5049]: I0127 19:18:01.356838 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:18:01 crc kubenswrapper[5049]: I0127 19:18:01.404910 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2khct"] Jan 27 19:18:03 crc kubenswrapper[5049]: I0127 19:18:03.324179 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2khct" podUID="2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" containerName="registry-server" containerID="cri-o://5cfb71a8b668f4ef304ceb97f863c36b8d59871d1cf5977ac50bdfaca983e462" gracePeriod=2 Jan 27 19:18:03 crc kubenswrapper[5049]: I0127 19:18:03.889169 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:18:03 crc kubenswrapper[5049]: I0127 19:18:03.987114 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-catalog-content\") pod \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\" (UID: \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\") " Jan 27 19:18:03 crc kubenswrapper[5049]: I0127 19:18:03.987334 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6blv8\" (UniqueName: \"kubernetes.io/projected/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-kube-api-access-6blv8\") pod \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\" (UID: \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\") " Jan 27 19:18:03 crc kubenswrapper[5049]: I0127 19:18:03.987427 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-utilities\") pod \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\" (UID: \"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970\") " Jan 27 19:18:03 crc kubenswrapper[5049]: I0127 19:18:03.988398 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-utilities" (OuterVolumeSpecName: "utilities") pod "2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" (UID: "2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:18:03 crc kubenswrapper[5049]: I0127 19:18:03.998615 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-kube-api-access-6blv8" (OuterVolumeSpecName: "kube-api-access-6blv8") pod "2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" (UID: "2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970"). InnerVolumeSpecName "kube-api-access-6blv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.089556 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.089599 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6blv8\" (UniqueName: \"kubernetes.io/projected/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-kube-api-access-6blv8\") on node \"crc\" DevicePath \"\"" Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.336910 5049 generic.go:334] "Generic (PLEG): container finished" podID="2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" containerID="5cfb71a8b668f4ef304ceb97f863c36b8d59871d1cf5977ac50bdfaca983e462" exitCode=0 Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.337008 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2khct" event={"ID":"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970","Type":"ContainerDied","Data":"5cfb71a8b668f4ef304ceb97f863c36b8d59871d1cf5977ac50bdfaca983e462"} Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.337271 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2khct" event={"ID":"2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970","Type":"ContainerDied","Data":"617aa981a68317b320224894c880f88dc73e8b94d56b2be6a4403c7529ccfc1b"} Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.337293 5049 scope.go:117] "RemoveContainer" containerID="5cfb71a8b668f4ef304ceb97f863c36b8d59871d1cf5977ac50bdfaca983e462" Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.337033 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2khct" Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.359892 5049 scope.go:117] "RemoveContainer" containerID="32739f02ac5c17313cbdf2e8984954905f6672bd2899fa36e08f560fe32a01e5" Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.383103 5049 scope.go:117] "RemoveContainer" containerID="6d603f2f32cb852733d1a712cbbafbe9f83d65bb314ce24969d8d0b75e42057c" Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.434369 5049 scope.go:117] "RemoveContainer" containerID="5cfb71a8b668f4ef304ceb97f863c36b8d59871d1cf5977ac50bdfaca983e462" Jan 27 19:18:04 crc kubenswrapper[5049]: E0127 19:18:04.434874 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cfb71a8b668f4ef304ceb97f863c36b8d59871d1cf5977ac50bdfaca983e462\": container with ID starting with 5cfb71a8b668f4ef304ceb97f863c36b8d59871d1cf5977ac50bdfaca983e462 not found: ID does not exist" containerID="5cfb71a8b668f4ef304ceb97f863c36b8d59871d1cf5977ac50bdfaca983e462" Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.434911 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cfb71a8b668f4ef304ceb97f863c36b8d59871d1cf5977ac50bdfaca983e462"} err="failed to get container status \"5cfb71a8b668f4ef304ceb97f863c36b8d59871d1cf5977ac50bdfaca983e462\": rpc error: code = NotFound desc = could not find container \"5cfb71a8b668f4ef304ceb97f863c36b8d59871d1cf5977ac50bdfaca983e462\": container with ID starting with 5cfb71a8b668f4ef304ceb97f863c36b8d59871d1cf5977ac50bdfaca983e462 not found: ID does not exist" Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.434938 5049 scope.go:117] "RemoveContainer" containerID="32739f02ac5c17313cbdf2e8984954905f6672bd2899fa36e08f560fe32a01e5" Jan 27 19:18:04 crc kubenswrapper[5049]: E0127 19:18:04.435418 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32739f02ac5c17313cbdf2e8984954905f6672bd2899fa36e08f560fe32a01e5\": container with ID starting with 32739f02ac5c17313cbdf2e8984954905f6672bd2899fa36e08f560fe32a01e5 not found: ID does not exist" containerID="32739f02ac5c17313cbdf2e8984954905f6672bd2899fa36e08f560fe32a01e5" Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.435471 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32739f02ac5c17313cbdf2e8984954905f6672bd2899fa36e08f560fe32a01e5"} err="failed to get container status \"32739f02ac5c17313cbdf2e8984954905f6672bd2899fa36e08f560fe32a01e5\": rpc error: code = NotFound desc = could not find container \"32739f02ac5c17313cbdf2e8984954905f6672bd2899fa36e08f560fe32a01e5\": container with ID starting with 32739f02ac5c17313cbdf2e8984954905f6672bd2899fa36e08f560fe32a01e5 not found: ID does not exist" Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.435505 5049 scope.go:117] "RemoveContainer" containerID="6d603f2f32cb852733d1a712cbbafbe9f83d65bb314ce24969d8d0b75e42057c" Jan 27 19:18:04 crc kubenswrapper[5049]: E0127 19:18:04.435946 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d603f2f32cb852733d1a712cbbafbe9f83d65bb314ce24969d8d0b75e42057c\": container with ID starting with 6d603f2f32cb852733d1a712cbbafbe9f83d65bb314ce24969d8d0b75e42057c not found: ID does not exist" containerID="6d603f2f32cb852733d1a712cbbafbe9f83d65bb314ce24969d8d0b75e42057c" 
Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.435970 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d603f2f32cb852733d1a712cbbafbe9f83d65bb314ce24969d8d0b75e42057c"} err="failed to get container status \"6d603f2f32cb852733d1a712cbbafbe9f83d65bb314ce24969d8d0b75e42057c\": rpc error: code = NotFound desc = could not find container \"6d603f2f32cb852733d1a712cbbafbe9f83d65bb314ce24969d8d0b75e42057c\": container with ID starting with 6d603f2f32cb852733d1a712cbbafbe9f83d65bb314ce24969d8d0b75e42057c not found: ID does not exist"
Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.515624 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" (UID: "2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.602879 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.699109 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2khct"]
Jan 27 19:18:04 crc kubenswrapper[5049]: I0127 19:18:04.709653 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2khct"]
Jan 27 19:18:05 crc kubenswrapper[5049]: I0127 19:18:05.657424 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" path="/var/lib/kubelet/pods/2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970/volumes"
Jan 27 19:18:06 crc kubenswrapper[5049]: I0127 19:18:06.646050 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"
Jan 27 19:18:06 crc kubenswrapper[5049]: E0127 19:18:06.646760 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:18:19 crc kubenswrapper[5049]: I0127 19:18:19.646366 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf"
Jan 27 19:18:20 crc kubenswrapper[5049]: I0127 19:18:20.469080 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"fe28027e9d5bac15184fb5925884f4ee5b93dff477b09128301855d8ebed5e23"}
Jan 27 19:18:22 crc kubenswrapper[5049]: I0127 19:18:22.791979 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6gx2s"]
Jan 27 19:18:22 crc kubenswrapper[5049]: E0127 19:18:22.792967 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" containerName="extract-utilities"
Jan 27 19:18:22 crc kubenswrapper[5049]: I0127 19:18:22.792981 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" containerName="extract-utilities"
Jan 27 19:18:22 crc kubenswrapper[5049]: E0127 19:18:22.793029 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" containerName="registry-server"
Jan 27 19:18:22 crc kubenswrapper[5049]: I0127 19:18:22.793037 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" containerName="registry-server"
Jan 27 19:18:22 crc kubenswrapper[5049]: E0127 19:18:22.793049 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" containerName="extract-content"
Jan 27 19:18:22 crc kubenswrapper[5049]: I0127 19:18:22.793055 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" containerName="extract-content"
Jan 27 19:18:22 crc kubenswrapper[5049]: I0127 19:18:22.806016 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff0f4c7-a6b6-41c5-8c95-e2b8eb675970" containerName="registry-server"
Jan 27 19:18:22 crc kubenswrapper[5049]: I0127 19:18:22.807650 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gx2s"]
Jan 27 19:18:22 crc kubenswrapper[5049]: I0127 19:18:22.807770 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gx2s"
Jan 27 19:18:22 crc kubenswrapper[5049]: I0127 19:18:22.957604 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-utilities\") pod \"redhat-marketplace-6gx2s\" (UID: \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\") " pod="openshift-marketplace/redhat-marketplace-6gx2s"
Jan 27 19:18:22 crc kubenswrapper[5049]: I0127 19:18:22.957659 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4bbm\" (UniqueName: \"kubernetes.io/projected/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-kube-api-access-w4bbm\") pod \"redhat-marketplace-6gx2s\" (UID: \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\") " pod="openshift-marketplace/redhat-marketplace-6gx2s"
Jan 27 19:18:22 crc kubenswrapper[5049]: I0127 19:18:22.957773 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-catalog-content\") pod \"redhat-marketplace-6gx2s\" (UID: \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\") " pod="openshift-marketplace/redhat-marketplace-6gx2s"
Jan 27 19:18:23 crc kubenswrapper[5049]: I0127 19:18:23.059954 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-utilities\") pod \"redhat-marketplace-6gx2s\" (UID: \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\") " pod="openshift-marketplace/redhat-marketplace-6gx2s"
Jan 27 19:18:23 crc kubenswrapper[5049]: I0127 19:18:23.060001 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4bbm\" (UniqueName: \"kubernetes.io/projected/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-kube-api-access-w4bbm\") pod \"redhat-marketplace-6gx2s\" (UID: \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\") " pod="openshift-marketplace/redhat-marketplace-6gx2s"
Jan 27 19:18:23 crc kubenswrapper[5049]: I0127 19:18:23.060066 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-catalog-content\") pod \"redhat-marketplace-6gx2s\" (UID: \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\") " pod="openshift-marketplace/redhat-marketplace-6gx2s"
Jan 27 19:18:23 crc kubenswrapper[5049]: I0127 19:18:23.060532 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-catalog-content\") pod \"redhat-marketplace-6gx2s\" (UID: \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\") " pod="openshift-marketplace/redhat-marketplace-6gx2s"
Jan 27 19:18:23 crc kubenswrapper[5049]: I0127 19:18:23.060773 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-utilities\") pod \"redhat-marketplace-6gx2s\" (UID: \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\") " pod="openshift-marketplace/redhat-marketplace-6gx2s"
Jan 27 19:18:23 crc kubenswrapper[5049]: I0127 19:18:23.081830 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4bbm\" (UniqueName: \"kubernetes.io/projected/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-kube-api-access-w4bbm\") pod \"redhat-marketplace-6gx2s\" (UID: \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\") " pod="openshift-marketplace/redhat-marketplace-6gx2s"
Jan 27 19:18:23 crc kubenswrapper[5049]: I0127 19:18:23.135662 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gx2s"
Jan 27 19:18:23 crc kubenswrapper[5049]: I0127 19:18:23.734265 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gx2s"]
Jan 27 19:18:23 crc kubenswrapper[5049]: W0127 19:18:23.739956 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f1c3c2d_212e_472d_8bb6_a5bdacc91fbd.slice/crio-7735de8b9bb7d9ef6a68b2daa458472304afedf8dcea4403fc54fe6362e7e8f7 WatchSource:0}: Error finding container 7735de8b9bb7d9ef6a68b2daa458472304afedf8dcea4403fc54fe6362e7e8f7: Status 404 returned error can't find the container with id 7735de8b9bb7d9ef6a68b2daa458472304afedf8dcea4403fc54fe6362e7e8f7
Jan 27 19:18:24 crc kubenswrapper[5049]: I0127 19:18:24.500784 5049 generic.go:334] "Generic (PLEG): container finished" podID="8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" containerID="7f6aa81c2ba875d8b99a3015696c52a2e887b3a54a653f29239308628de81052" exitCode=0
Jan 27 19:18:24 crc kubenswrapper[5049]: I0127 19:18:24.500832 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gx2s" event={"ID":"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd","Type":"ContainerDied","Data":"7f6aa81c2ba875d8b99a3015696c52a2e887b3a54a653f29239308628de81052"}
Jan 27 19:18:24 crc kubenswrapper[5049]: I0127 19:18:24.501307 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gx2s" event={"ID":"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd","Type":"ContainerStarted","Data":"7735de8b9bb7d9ef6a68b2daa458472304afedf8dcea4403fc54fe6362e7e8f7"}
Jan 27 19:18:26 crc kubenswrapper[5049]: I0127 19:18:26.527461 5049 generic.go:334] "Generic (PLEG): container finished" podID="8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" containerID="5a81409e5c983f011c1c8f6ae1d103fae9c9e9f6013dc32eb4f3aaec9faa74a4" exitCode=0
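The "Generic (PLEG): container finished" and "SyncLoop (PLEG): event for pod" pairs above come from the pod lifecycle event generator: it periodically relists container states and turns each observed transition into a ContainerStarted or ContainerDied event for the sync loop. A reduced sketch of that diffing step (types and state names are illustrative):

```go
package main

import "fmt"

// event mirrors the shape of the PLEG events in the log above.
type event struct{ Type, ContainerID string }

// relistDiff compares the previous relist with the current one and emits an
// event per state transition, the essence of PLEG.
func relistDiff(prev, cur map[string]string) []event {
	var events []event
	for id, state := range cur {
		switch {
		case state == "running" && prev[id] != "running":
			events = append(events, event{"ContainerStarted", id})
		case state == "exited" && prev[id] == "running":
			events = append(events, event{"ContainerDied", id})
		}
	}
	return events
}

func main() {
	prev := map[string]string{"7f6aa81c": "running"}
	cur := map[string]string{"7f6aa81c": "exited", "5a81409e": "running"}
	for _, e := range relistDiff(prev, cur) {
		fmt.Printf("SyncLoop (PLEG): %s %s\n", e.Type, e.ContainerID)
	}
}
```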
containerID="5a81409e5c983f011c1c8f6ae1d103fae9c9e9f6013dc32eb4f3aaec9faa74a4" exitCode=0 Jan 27 19:18:26 crc kubenswrapper[5049]: I0127 19:18:26.527520 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gx2s" event={"ID":"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd","Type":"ContainerDied","Data":"5a81409e5c983f011c1c8f6ae1d103fae9c9e9f6013dc32eb4f3aaec9faa74a4"} Jan 27 19:18:27 crc kubenswrapper[5049]: I0127 19:18:27.539229 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gx2s" event={"ID":"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd","Type":"ContainerStarted","Data":"2ea9afd4be274ce923d18f52ecaf10c0fb5f446cce34add487f5bab95e9118e4"} Jan 27 19:18:33 crc kubenswrapper[5049]: I0127 19:18:33.137036 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6gx2s" Jan 27 19:18:33 crc kubenswrapper[5049]: I0127 19:18:33.137637 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6gx2s" Jan 27 19:18:33 crc kubenswrapper[5049]: I0127 19:18:33.184932 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6gx2s" Jan 27 19:18:33 crc kubenswrapper[5049]: I0127 19:18:33.212124 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6gx2s" podStartSLOduration=8.770014496 podStartE2EDuration="11.212100187s" podCreationTimestamp="2026-01-27 19:18:22 +0000 UTC" firstStartedPulling="2026-01-27 19:18:24.503213967 +0000 UTC m=+8479.602187516" lastFinishedPulling="2026-01-27 19:18:26.945299658 +0000 UTC m=+8482.044273207" observedRunningTime="2026-01-27 19:18:27.562425704 +0000 UTC m=+8482.661399253" watchObservedRunningTime="2026-01-27 19:18:33.212100187 +0000 UTC m=+8488.311073736" Jan 27 19:18:33 crc kubenswrapper[5049]: I0127 19:18:33.660856 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6gx2s" Jan 27 19:18:33 crc kubenswrapper[5049]: I0127 19:18:33.715073 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gx2s"] Jan 27 19:18:35 crc kubenswrapper[5049]: I0127 19:18:35.613817 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6gx2s" podUID="8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" containerName="registry-server" containerID="cri-o://2ea9afd4be274ce923d18f52ecaf10c0fb5f446cce34add487f5bab95e9118e4" gracePeriod=2 Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.081526 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gx2s" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.134067 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4bbm\" (UniqueName: \"kubernetes.io/projected/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-kube-api-access-w4bbm\") pod \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\" (UID: \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\") " Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.134123 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-utilities\") pod \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\" (UID: \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\") " Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.134199 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-catalog-content\") pod \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\" (UID: \"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd\") " Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.135199 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-utilities" (OuterVolumeSpecName: "utilities") pod "8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" (UID: "8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.140431 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-kube-api-access-w4bbm" (OuterVolumeSpecName: "kube-api-access-w4bbm") pod "8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" (UID: "8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd"). InnerVolumeSpecName "kube-api-access-w4bbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.156544 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" (UID: "8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.235266 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4bbm\" (UniqueName: \"kubernetes.io/projected/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-kube-api-access-w4bbm\") on node \"crc\" DevicePath \"\"" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.235317 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.235330 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.622724 5049 generic.go:334] "Generic (PLEG): container finished" podID="8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" containerID="2ea9afd4be274ce923d18f52ecaf10c0fb5f446cce34add487f5bab95e9118e4" exitCode=0 Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.622824 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gx2s" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.622834 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gx2s" event={"ID":"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd","Type":"ContainerDied","Data":"2ea9afd4be274ce923d18f52ecaf10c0fb5f446cce34add487f5bab95e9118e4"} Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.622911 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gx2s" event={"ID":"8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd","Type":"ContainerDied","Data":"7735de8b9bb7d9ef6a68b2daa458472304afedf8dcea4403fc54fe6362e7e8f7"} Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.622949 5049 scope.go:117] "RemoveContainer" containerID="2ea9afd4be274ce923d18f52ecaf10c0fb5f446cce34add487f5bab95e9118e4" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.643434 5049 scope.go:117] "RemoveContainer" containerID="5a81409e5c983f011c1c8f6ae1d103fae9c9e9f6013dc32eb4f3aaec9faa74a4" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.664060 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gx2s"] Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.673195 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gx2s"] Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.674332 5049 scope.go:117] "RemoveContainer" containerID="7f6aa81c2ba875d8b99a3015696c52a2e887b3a54a653f29239308628de81052" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.712217 5049 scope.go:117] "RemoveContainer" containerID="2ea9afd4be274ce923d18f52ecaf10c0fb5f446cce34add487f5bab95e9118e4" Jan 27 19:18:36 crc kubenswrapper[5049]: E0127 19:18:36.713014 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ea9afd4be274ce923d18f52ecaf10c0fb5f446cce34add487f5bab95e9118e4\": container with ID starting with 2ea9afd4be274ce923d18f52ecaf10c0fb5f446cce34add487f5bab95e9118e4 not found: ID does not exist" containerID="2ea9afd4be274ce923d18f52ecaf10c0fb5f446cce34add487f5bab95e9118e4" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.713047 5049 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ea9afd4be274ce923d18f52ecaf10c0fb5f446cce34add487f5bab95e9118e4"} err="failed to get container status \"2ea9afd4be274ce923d18f52ecaf10c0fb5f446cce34add487f5bab95e9118e4\": rpc error: code = NotFound desc = could not find container \"2ea9afd4be274ce923d18f52ecaf10c0fb5f446cce34add487f5bab95e9118e4\": container with ID starting with 2ea9afd4be274ce923d18f52ecaf10c0fb5f446cce34add487f5bab95e9118e4 not found: ID does not exist" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.713074 5049 scope.go:117] "RemoveContainer" containerID="5a81409e5c983f011c1c8f6ae1d103fae9c9e9f6013dc32eb4f3aaec9faa74a4" Jan 27 19:18:36 crc kubenswrapper[5049]: E0127 19:18:36.713389 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a81409e5c983f011c1c8f6ae1d103fae9c9e9f6013dc32eb4f3aaec9faa74a4\": container with ID starting with 5a81409e5c983f011c1c8f6ae1d103fae9c9e9f6013dc32eb4f3aaec9faa74a4 not found: ID does not exist" containerID="5a81409e5c983f011c1c8f6ae1d103fae9c9e9f6013dc32eb4f3aaec9faa74a4" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.713411 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a81409e5c983f011c1c8f6ae1d103fae9c9e9f6013dc32eb4f3aaec9faa74a4"} err="failed to get container status \"5a81409e5c983f011c1c8f6ae1d103fae9c9e9f6013dc32eb4f3aaec9faa74a4\": rpc error: code = NotFound desc = could not find container \"5a81409e5c983f011c1c8f6ae1d103fae9c9e9f6013dc32eb4f3aaec9faa74a4\": container with ID starting with 5a81409e5c983f011c1c8f6ae1d103fae9c9e9f6013dc32eb4f3aaec9faa74a4 not found: ID does not exist" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.713427 5049 scope.go:117] "RemoveContainer" containerID="7f6aa81c2ba875d8b99a3015696c52a2e887b3a54a653f29239308628de81052" Jan 27 19:18:36 crc kubenswrapper[5049]: E0127 19:18:36.713749 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f6aa81c2ba875d8b99a3015696c52a2e887b3a54a653f29239308628de81052\": container with ID starting with 7f6aa81c2ba875d8b99a3015696c52a2e887b3a54a653f29239308628de81052 not found: ID does not exist" containerID="7f6aa81c2ba875d8b99a3015696c52a2e887b3a54a653f29239308628de81052" Jan 27 19:18:36 crc kubenswrapper[5049]: I0127 19:18:36.713799 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f6aa81c2ba875d8b99a3015696c52a2e887b3a54a653f29239308628de81052"} err="failed to get container status \"7f6aa81c2ba875d8b99a3015696c52a2e887b3a54a653f29239308628de81052\": rpc error: code = NotFound desc = could not find container \"7f6aa81c2ba875d8b99a3015696c52a2e887b3a54a653f29239308628de81052\": container with ID starting with 7f6aa81c2ba875d8b99a3015696c52a2e887b3a54a653f29239308628de81052 not found: ID does not exist" Jan 27 19:18:37 crc kubenswrapper[5049]: I0127 19:18:37.679547 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" path="/var/lib/kubelet/pods/8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd/volumes" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.089975 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j2dzs"] Jan 27 19:19:55 crc kubenswrapper[5049]: E0127 19:19:55.091079 5049 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" containerName="registry-server" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.091097 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" containerName="registry-server" Jan 27 19:19:55 crc kubenswrapper[5049]: E0127 19:19:55.091117 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" containerName="extract-content" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.091125 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" containerName="extract-content" Jan 27 19:19:55 crc kubenswrapper[5049]: E0127 19:19:55.091158 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" containerName="extract-utilities" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.091166 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" containerName="extract-utilities" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.091408 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f1c3c2d-212e-472d-8bb6-a5bdacc91fbd" containerName="registry-server" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.093096 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.097449 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2dzs"] Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.271663 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adca9989-ee7c-4344-a374-77f93eeab801-catalog-content\") pod \"redhat-operators-j2dzs\" (UID: \"adca9989-ee7c-4344-a374-77f93eeab801\") " pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.272026 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adca9989-ee7c-4344-a374-77f93eeab801-utilities\") pod \"redhat-operators-j2dzs\" (UID: \"adca9989-ee7c-4344-a374-77f93eeab801\") " pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.272069 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqjmk\" (UniqueName: \"kubernetes.io/projected/adca9989-ee7c-4344-a374-77f93eeab801-kube-api-access-vqjmk\") pod \"redhat-operators-j2dzs\" (UID: \"adca9989-ee7c-4344-a374-77f93eeab801\") " pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.373864 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adca9989-ee7c-4344-a374-77f93eeab801-catalog-content\") pod \"redhat-operators-j2dzs\" (UID: \"adca9989-ee7c-4344-a374-77f93eeab801\") " pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.373909 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adca9989-ee7c-4344-a374-77f93eeab801-utilities\") pod \"redhat-operators-j2dzs\" (UID: 
\"adca9989-ee7c-4344-a374-77f93eeab801\") " pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.373939 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqjmk\" (UniqueName: \"kubernetes.io/projected/adca9989-ee7c-4344-a374-77f93eeab801-kube-api-access-vqjmk\") pod \"redhat-operators-j2dzs\" (UID: \"adca9989-ee7c-4344-a374-77f93eeab801\") " pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.374292 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adca9989-ee7c-4344-a374-77f93eeab801-catalog-content\") pod \"redhat-operators-j2dzs\" (UID: \"adca9989-ee7c-4344-a374-77f93eeab801\") " pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.374510 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adca9989-ee7c-4344-a374-77f93eeab801-utilities\") pod \"redhat-operators-j2dzs\" (UID: \"adca9989-ee7c-4344-a374-77f93eeab801\") " pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.396592 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqjmk\" (UniqueName: \"kubernetes.io/projected/adca9989-ee7c-4344-a374-77f93eeab801-kube-api-access-vqjmk\") pod \"redhat-operators-j2dzs\" (UID: \"adca9989-ee7c-4344-a374-77f93eeab801\") " pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.414142 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:19:55 crc kubenswrapper[5049]: I0127 19:19:55.876737 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2dzs"] Jan 27 19:19:56 crc kubenswrapper[5049]: I0127 19:19:56.362162 5049 generic.go:334] "Generic (PLEG): container finished" podID="adca9989-ee7c-4344-a374-77f93eeab801" containerID="2ebb7cfd0782f28ee59a736633fed6eee1b9175fd1c579450988a81d000a94af" exitCode=0 Jan 27 19:19:56 crc kubenswrapper[5049]: I0127 19:19:56.362219 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2dzs" event={"ID":"adca9989-ee7c-4344-a374-77f93eeab801","Type":"ContainerDied","Data":"2ebb7cfd0782f28ee59a736633fed6eee1b9175fd1c579450988a81d000a94af"} Jan 27 19:19:56 crc kubenswrapper[5049]: I0127 19:19:56.362421 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2dzs" event={"ID":"adca9989-ee7c-4344-a374-77f93eeab801","Type":"ContainerStarted","Data":"b2befe68b22d992fc4899e17520b23073781133ada38159f1710237083b149c6"} Jan 27 19:19:58 crc kubenswrapper[5049]: I0127 19:19:58.379940 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2dzs" event={"ID":"adca9989-ee7c-4344-a374-77f93eeab801","Type":"ContainerStarted","Data":"68b0e52a13dcb77ff71b0a3f2ef04be4fb44af7ba2fbaee330533f6d0b226bc0"} Jan 27 19:19:59 crc kubenswrapper[5049]: I0127 19:19:59.394503 5049 generic.go:334] "Generic (PLEG): container finished" podID="adca9989-ee7c-4344-a374-77f93eeab801" containerID="68b0e52a13dcb77ff71b0a3f2ef04be4fb44af7ba2fbaee330533f6d0b226bc0" exitCode=0 Jan 27 19:19:59 crc kubenswrapper[5049]: I0127 19:19:59.394568 5049 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2dzs" event={"ID":"adca9989-ee7c-4344-a374-77f93eeab801","Type":"ContainerDied","Data":"68b0e52a13dcb77ff71b0a3f2ef04be4fb44af7ba2fbaee330533f6d0b226bc0"} Jan 27 19:20:00 crc kubenswrapper[5049]: I0127 19:20:00.406589 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2dzs" event={"ID":"adca9989-ee7c-4344-a374-77f93eeab801","Type":"ContainerStarted","Data":"6fea9f3289755717ee8fd5932a1cc19db31fcaaa168e850f84ecf84af91e8ac1"} Jan 27 19:20:05 crc kubenswrapper[5049]: I0127 19:20:05.414325 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:20:05 crc kubenswrapper[5049]: I0127 19:20:05.414928 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:20:06 crc kubenswrapper[5049]: I0127 19:20:06.479620 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j2dzs" podUID="adca9989-ee7c-4344-a374-77f93eeab801" containerName="registry-server" probeResult="failure" output=< Jan 27 19:20:06 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 19:20:06 crc kubenswrapper[5049]: > Jan 27 19:20:15 crc kubenswrapper[5049]: I0127 19:20:15.474500 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:20:15 crc kubenswrapper[5049]: I0127 19:20:15.500894 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j2dzs" podStartSLOduration=17.007329396 podStartE2EDuration="20.500878146s" podCreationTimestamp="2026-01-27 19:19:55 +0000 UTC" firstStartedPulling="2026-01-27 19:19:56.36400814 +0000 UTC m=+8571.462981699" lastFinishedPulling="2026-01-27 19:19:59.8575569 +0000 UTC m=+8574.956530449" observedRunningTime="2026-01-27 19:20:00.423364291 +0000 UTC m=+8575.522337880" watchObservedRunningTime="2026-01-27 19:20:15.500878146 +0000 UTC m=+8590.599851695" Jan 27 19:20:15 crc kubenswrapper[5049]: I0127 19:20:15.531763 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:20:16 crc kubenswrapper[5049]: I0127 19:20:16.492236 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2dzs"] Jan 27 19:20:16 crc kubenswrapper[5049]: I0127 19:20:16.536350 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j2dzs" podUID="adca9989-ee7c-4344-a374-77f93eeab801" containerName="registry-server" containerID="cri-o://6fea9f3289755717ee8fd5932a1cc19db31fcaaa168e850f84ecf84af91e8ac1" gracePeriod=2 Jan 27 19:20:16 crc kubenswrapper[5049]: I0127 19:20:16.709922 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f48nb"] Jan 27 19:20:16 crc kubenswrapper[5049]: I0127 19:20:16.714379 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:16 crc kubenswrapper[5049]: I0127 19:20:16.729938 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f48nb"] Jan 27 19:20:16 crc kubenswrapper[5049]: I0127 19:20:16.829621 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79e6831-a376-487e-8d22-5f5852a5037e-catalog-content\") pod \"community-operators-f48nb\" (UID: \"a79e6831-a376-487e-8d22-5f5852a5037e\") " pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:16 crc kubenswrapper[5049]: I0127 19:20:16.829789 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79e6831-a376-487e-8d22-5f5852a5037e-utilities\") pod \"community-operators-f48nb\" (UID: \"a79e6831-a376-487e-8d22-5f5852a5037e\") " pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:16 crc kubenswrapper[5049]: I0127 19:20:16.829943 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pzn5\" (UniqueName: \"kubernetes.io/projected/a79e6831-a376-487e-8d22-5f5852a5037e-kube-api-access-2pzn5\") pod \"community-operators-f48nb\" (UID: \"a79e6831-a376-487e-8d22-5f5852a5037e\") " pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:16 crc kubenswrapper[5049]: I0127 19:20:16.931385 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79e6831-a376-487e-8d22-5f5852a5037e-utilities\") pod \"community-operators-f48nb\" (UID: \"a79e6831-a376-487e-8d22-5f5852a5037e\") " pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:16 crc kubenswrapper[5049]: I0127 19:20:16.932065 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pzn5\" (UniqueName: \"kubernetes.io/projected/a79e6831-a376-487e-8d22-5f5852a5037e-kube-api-access-2pzn5\") pod \"community-operators-f48nb\" (UID: \"a79e6831-a376-487e-8d22-5f5852a5037e\") " pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:16 crc kubenswrapper[5049]: I0127 19:20:16.932204 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79e6831-a376-487e-8d22-5f5852a5037e-utilities\") pod \"community-operators-f48nb\" (UID: \"a79e6831-a376-487e-8d22-5f5852a5037e\") " pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:16 crc kubenswrapper[5049]: I0127 19:20:16.936273 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79e6831-a376-487e-8d22-5f5852a5037e-catalog-content\") pod \"community-operators-f48nb\" (UID: \"a79e6831-a376-487e-8d22-5f5852a5037e\") " pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:16 crc kubenswrapper[5049]: I0127 19:20:16.936882 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79e6831-a376-487e-8d22-5f5852a5037e-catalog-content\") pod \"community-operators-f48nb\" (UID: \"a79e6831-a376-487e-8d22-5f5852a5037e\") " pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:16 crc kubenswrapper[5049]: I0127 19:20:16.953452 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2pzn5\" (UniqueName: \"kubernetes.io/projected/a79e6831-a376-487e-8d22-5f5852a5037e-kube-api-access-2pzn5\") pod \"community-operators-f48nb\" (UID: \"a79e6831-a376-487e-8d22-5f5852a5037e\") " pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.055836 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.171238 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.241406 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adca9989-ee7c-4344-a374-77f93eeab801-catalog-content\") pod \"adca9989-ee7c-4344-a374-77f93eeab801\" (UID: \"adca9989-ee7c-4344-a374-77f93eeab801\") " Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.241511 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adca9989-ee7c-4344-a374-77f93eeab801-utilities\") pod \"adca9989-ee7c-4344-a374-77f93eeab801\" (UID: \"adca9989-ee7c-4344-a374-77f93eeab801\") " Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.241573 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqjmk\" (UniqueName: \"kubernetes.io/projected/adca9989-ee7c-4344-a374-77f93eeab801-kube-api-access-vqjmk\") pod \"adca9989-ee7c-4344-a374-77f93eeab801\" (UID: \"adca9989-ee7c-4344-a374-77f93eeab801\") " Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.243543 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adca9989-ee7c-4344-a374-77f93eeab801-utilities" (OuterVolumeSpecName: "utilities") pod "adca9989-ee7c-4344-a374-77f93eeab801" (UID: "adca9989-ee7c-4344-a374-77f93eeab801"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.255631 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adca9989-ee7c-4344-a374-77f93eeab801-kube-api-access-vqjmk" (OuterVolumeSpecName: "kube-api-access-vqjmk") pod "adca9989-ee7c-4344-a374-77f93eeab801" (UID: "adca9989-ee7c-4344-a374-77f93eeab801"). InnerVolumeSpecName "kube-api-access-vqjmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.346331 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqjmk\" (UniqueName: \"kubernetes.io/projected/adca9989-ee7c-4344-a374-77f93eeab801-kube-api-access-vqjmk\") on node \"crc\" DevicePath \"\"" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.346370 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adca9989-ee7c-4344-a374-77f93eeab801-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.403307 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adca9989-ee7c-4344-a374-77f93eeab801-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adca9989-ee7c-4344-a374-77f93eeab801" (UID: "adca9989-ee7c-4344-a374-77f93eeab801"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.449156 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adca9989-ee7c-4344-a374-77f93eeab801-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.548115 5049 generic.go:334] "Generic (PLEG): container finished" podID="adca9989-ee7c-4344-a374-77f93eeab801" containerID="6fea9f3289755717ee8fd5932a1cc19db31fcaaa168e850f84ecf84af91e8ac1" exitCode=0 Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.548174 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2dzs" event={"ID":"adca9989-ee7c-4344-a374-77f93eeab801","Type":"ContainerDied","Data":"6fea9f3289755717ee8fd5932a1cc19db31fcaaa168e850f84ecf84af91e8ac1"} Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.548206 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2dzs" event={"ID":"adca9989-ee7c-4344-a374-77f93eeab801","Type":"ContainerDied","Data":"b2befe68b22d992fc4899e17520b23073781133ada38159f1710237083b149c6"} Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.548235 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2dzs" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.548295 5049 scope.go:117] "RemoveContainer" containerID="6fea9f3289755717ee8fd5932a1cc19db31fcaaa168e850f84ecf84af91e8ac1" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.571174 5049 scope.go:117] "RemoveContainer" containerID="68b0e52a13dcb77ff71b0a3f2ef04be4fb44af7ba2fbaee330533f6d0b226bc0" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.622145 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j2dzs"] Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.629590 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j2dzs"] Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.652941 5049 scope.go:117] "RemoveContainer" containerID="2ebb7cfd0782f28ee59a736633fed6eee1b9175fd1c579450988a81d000a94af" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.663887 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adca9989-ee7c-4344-a374-77f93eeab801" path="/var/lib/kubelet/pods/adca9989-ee7c-4344-a374-77f93eeab801/volumes" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.664623 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f48nb"] Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.787997 5049 scope.go:117] "RemoveContainer" containerID="6fea9f3289755717ee8fd5932a1cc19db31fcaaa168e850f84ecf84af91e8ac1" Jan 27 19:20:17 crc kubenswrapper[5049]: E0127 19:20:17.788618 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fea9f3289755717ee8fd5932a1cc19db31fcaaa168e850f84ecf84af91e8ac1\": container with ID starting with 6fea9f3289755717ee8fd5932a1cc19db31fcaaa168e850f84ecf84af91e8ac1 not found: ID does not exist" containerID="6fea9f3289755717ee8fd5932a1cc19db31fcaaa168e850f84ecf84af91e8ac1" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.788652 5049 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6fea9f3289755717ee8fd5932a1cc19db31fcaaa168e850f84ecf84af91e8ac1"} err="failed to get container status \"6fea9f3289755717ee8fd5932a1cc19db31fcaaa168e850f84ecf84af91e8ac1\": rpc error: code = NotFound desc = could not find container \"6fea9f3289755717ee8fd5932a1cc19db31fcaaa168e850f84ecf84af91e8ac1\": container with ID starting with 6fea9f3289755717ee8fd5932a1cc19db31fcaaa168e850f84ecf84af91e8ac1 not found: ID does not exist" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.788685 5049 scope.go:117] "RemoveContainer" containerID="68b0e52a13dcb77ff71b0a3f2ef04be4fb44af7ba2fbaee330533f6d0b226bc0" Jan 27 19:20:17 crc kubenswrapper[5049]: E0127 19:20:17.789008 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68b0e52a13dcb77ff71b0a3f2ef04be4fb44af7ba2fbaee330533f6d0b226bc0\": container with ID starting with 68b0e52a13dcb77ff71b0a3f2ef04be4fb44af7ba2fbaee330533f6d0b226bc0 not found: ID does not exist" containerID="68b0e52a13dcb77ff71b0a3f2ef04be4fb44af7ba2fbaee330533f6d0b226bc0" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.789107 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68b0e52a13dcb77ff71b0a3f2ef04be4fb44af7ba2fbaee330533f6d0b226bc0"} err="failed to get container status \"68b0e52a13dcb77ff71b0a3f2ef04be4fb44af7ba2fbaee330533f6d0b226bc0\": rpc error: code = NotFound desc = could not find container \"68b0e52a13dcb77ff71b0a3f2ef04be4fb44af7ba2fbaee330533f6d0b226bc0\": container with ID starting with 68b0e52a13dcb77ff71b0a3f2ef04be4fb44af7ba2fbaee330533f6d0b226bc0 not found: ID does not exist" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.789192 5049 scope.go:117] "RemoveContainer" containerID="2ebb7cfd0782f28ee59a736633fed6eee1b9175fd1c579450988a81d000a94af" Jan 27 19:20:17 crc kubenswrapper[5049]: E0127 19:20:17.789518 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ebb7cfd0782f28ee59a736633fed6eee1b9175fd1c579450988a81d000a94af\": container with ID starting with 2ebb7cfd0782f28ee59a736633fed6eee1b9175fd1c579450988a81d000a94af not found: ID does not exist" containerID="2ebb7cfd0782f28ee59a736633fed6eee1b9175fd1c579450988a81d000a94af" Jan 27 19:20:17 crc kubenswrapper[5049]: I0127 19:20:17.789541 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ebb7cfd0782f28ee59a736633fed6eee1b9175fd1c579450988a81d000a94af"} err="failed to get container status \"2ebb7cfd0782f28ee59a736633fed6eee1b9175fd1c579450988a81d000a94af\": rpc error: code = NotFound desc = could not find container \"2ebb7cfd0782f28ee59a736633fed6eee1b9175fd1c579450988a81d000a94af\": container with ID starting with 2ebb7cfd0782f28ee59a736633fed6eee1b9175fd1c579450988a81d000a94af not found: ID does not exist" Jan 27 19:20:18 crc kubenswrapper[5049]: I0127 19:20:18.559459 5049 generic.go:334] "Generic (PLEG): container finished" podID="a79e6831-a376-487e-8d22-5f5852a5037e" containerID="dc23f29d2cd29f27eab8357022fa5fe1dba3ce9d23401123748245e018f548ce" exitCode=0 Jan 27 19:20:18 crc kubenswrapper[5049]: I0127 19:20:18.559752 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f48nb" event={"ID":"a79e6831-a376-487e-8d22-5f5852a5037e","Type":"ContainerDied","Data":"dc23f29d2cd29f27eab8357022fa5fe1dba3ce9d23401123748245e018f548ce"} Jan 27 19:20:18 crc kubenswrapper[5049]: 
I0127 19:20:18.559969 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f48nb" event={"ID":"a79e6831-a376-487e-8d22-5f5852a5037e","Type":"ContainerStarted","Data":"03ccc3b4d2493badcff67040a0bc045c1454008c99b229ca732ef3ab3782b05f"} Jan 27 19:20:19 crc kubenswrapper[5049]: I0127 19:20:19.577441 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f48nb" event={"ID":"a79e6831-a376-487e-8d22-5f5852a5037e","Type":"ContainerStarted","Data":"4f65fc253aac48fef0c50fd9531562775060d3443e2bff07250d206f4fc852c4"} Jan 27 19:20:20 crc kubenswrapper[5049]: I0127 19:20:20.590763 5049 generic.go:334] "Generic (PLEG): container finished" podID="a79e6831-a376-487e-8d22-5f5852a5037e" containerID="4f65fc253aac48fef0c50fd9531562775060d3443e2bff07250d206f4fc852c4" exitCode=0 Jan 27 19:20:20 crc kubenswrapper[5049]: I0127 19:20:20.590831 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f48nb" event={"ID":"a79e6831-a376-487e-8d22-5f5852a5037e","Type":"ContainerDied","Data":"4f65fc253aac48fef0c50fd9531562775060d3443e2bff07250d206f4fc852c4"} Jan 27 19:20:21 crc kubenswrapper[5049]: I0127 19:20:21.601852 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f48nb" event={"ID":"a79e6831-a376-487e-8d22-5f5852a5037e","Type":"ContainerStarted","Data":"9cf2ed3709f554999574db2f89199d3e491f4965472422366cf4785f22238004"} Jan 27 19:20:21 crc kubenswrapper[5049]: I0127 19:20:21.625561 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f48nb" podStartSLOduration=3.105338179 podStartE2EDuration="5.625532339s" podCreationTimestamp="2026-01-27 19:20:16 +0000 UTC" firstStartedPulling="2026-01-27 19:20:18.562900824 +0000 UTC m=+8593.661874373" lastFinishedPulling="2026-01-27 19:20:21.083094944 +0000 UTC m=+8596.182068533" observedRunningTime="2026-01-27 19:20:21.621179975 +0000 UTC m=+8596.720153554" watchObservedRunningTime="2026-01-27 19:20:21.625532339 +0000 UTC m=+8596.724505918" Jan 27 19:20:27 crc kubenswrapper[5049]: I0127 19:20:27.057755 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:27 crc kubenswrapper[5049]: I0127 19:20:27.058929 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:27 crc kubenswrapper[5049]: I0127 19:20:27.141515 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:27 crc kubenswrapper[5049]: I0127 19:20:27.700177 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:27 crc kubenswrapper[5049]: I0127 19:20:27.756345 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f48nb"] Jan 27 19:20:29 crc kubenswrapper[5049]: I0127 19:20:29.671568 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f48nb" podUID="a79e6831-a376-487e-8d22-5f5852a5037e" containerName="registry-server" containerID="cri-o://9cf2ed3709f554999574db2f89199d3e491f4965472422366cf4785f22238004" gracePeriod=2 Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.173872 5049 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.311063 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79e6831-a376-487e-8d22-5f5852a5037e-utilities\") pod \"a79e6831-a376-487e-8d22-5f5852a5037e\" (UID: \"a79e6831-a376-487e-8d22-5f5852a5037e\") " Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.311232 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pzn5\" (UniqueName: \"kubernetes.io/projected/a79e6831-a376-487e-8d22-5f5852a5037e-kube-api-access-2pzn5\") pod \"a79e6831-a376-487e-8d22-5f5852a5037e\" (UID: \"a79e6831-a376-487e-8d22-5f5852a5037e\") " Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.311301 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79e6831-a376-487e-8d22-5f5852a5037e-catalog-content\") pod \"a79e6831-a376-487e-8d22-5f5852a5037e\" (UID: \"a79e6831-a376-487e-8d22-5f5852a5037e\") " Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.311853 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a79e6831-a376-487e-8d22-5f5852a5037e-utilities" (OuterVolumeSpecName: "utilities") pod "a79e6831-a376-487e-8d22-5f5852a5037e" (UID: "a79e6831-a376-487e-8d22-5f5852a5037e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.316850 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79e6831-a376-487e-8d22-5f5852a5037e-kube-api-access-2pzn5" (OuterVolumeSpecName: "kube-api-access-2pzn5") pod "a79e6831-a376-487e-8d22-5f5852a5037e" (UID: "a79e6831-a376-487e-8d22-5f5852a5037e"). InnerVolumeSpecName "kube-api-access-2pzn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.365373 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a79e6831-a376-487e-8d22-5f5852a5037e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a79e6831-a376-487e-8d22-5f5852a5037e" (UID: "a79e6831-a376-487e-8d22-5f5852a5037e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.413992 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a79e6831-a376-487e-8d22-5f5852a5037e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.414025 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pzn5\" (UniqueName: \"kubernetes.io/projected/a79e6831-a376-487e-8d22-5f5852a5037e-kube-api-access-2pzn5\") on node \"crc\" DevicePath \"\"" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.414035 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a79e6831-a376-487e-8d22-5f5852a5037e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.681577 5049 generic.go:334] "Generic (PLEG): container finished" podID="a79e6831-a376-487e-8d22-5f5852a5037e" containerID="9cf2ed3709f554999574db2f89199d3e491f4965472422366cf4785f22238004" exitCode=0 Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.681903 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f48nb" event={"ID":"a79e6831-a376-487e-8d22-5f5852a5037e","Type":"ContainerDied","Data":"9cf2ed3709f554999574db2f89199d3e491f4965472422366cf4785f22238004"} Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.681937 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f48nb" event={"ID":"a79e6831-a376-487e-8d22-5f5852a5037e","Type":"ContainerDied","Data":"03ccc3b4d2493badcff67040a0bc045c1454008c99b229ca732ef3ab3782b05f"} Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.681963 5049 scope.go:117] "RemoveContainer" containerID="9cf2ed3709f554999574db2f89199d3e491f4965472422366cf4785f22238004" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.682152 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f48nb" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.724796 5049 scope.go:117] "RemoveContainer" containerID="4f65fc253aac48fef0c50fd9531562775060d3443e2bff07250d206f4fc852c4" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.729539 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f48nb"] Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.737989 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f48nb"] Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.746215 5049 scope.go:117] "RemoveContainer" containerID="dc23f29d2cd29f27eab8357022fa5fe1dba3ce9d23401123748245e018f548ce" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.791222 5049 scope.go:117] "RemoveContainer" containerID="9cf2ed3709f554999574db2f89199d3e491f4965472422366cf4785f22238004" Jan 27 19:20:30 crc kubenswrapper[5049]: E0127 19:20:30.791638 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cf2ed3709f554999574db2f89199d3e491f4965472422366cf4785f22238004\": container with ID starting with 9cf2ed3709f554999574db2f89199d3e491f4965472422366cf4785f22238004 not found: ID does not exist" containerID="9cf2ed3709f554999574db2f89199d3e491f4965472422366cf4785f22238004" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.791737 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf2ed3709f554999574db2f89199d3e491f4965472422366cf4785f22238004"} err="failed to get container status \"9cf2ed3709f554999574db2f89199d3e491f4965472422366cf4785f22238004\": rpc error: code = NotFound desc = could not find container \"9cf2ed3709f554999574db2f89199d3e491f4965472422366cf4785f22238004\": container with ID starting with 9cf2ed3709f554999574db2f89199d3e491f4965472422366cf4785f22238004 not found: ID does not exist" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.791765 5049 scope.go:117] "RemoveContainer" containerID="4f65fc253aac48fef0c50fd9531562775060d3443e2bff07250d206f4fc852c4" Jan 27 19:20:30 crc kubenswrapper[5049]: E0127 19:20:30.792157 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f65fc253aac48fef0c50fd9531562775060d3443e2bff07250d206f4fc852c4\": container with ID starting with 4f65fc253aac48fef0c50fd9531562775060d3443e2bff07250d206f4fc852c4 not found: ID does not exist" containerID="4f65fc253aac48fef0c50fd9531562775060d3443e2bff07250d206f4fc852c4" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.792192 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f65fc253aac48fef0c50fd9531562775060d3443e2bff07250d206f4fc852c4"} err="failed to get container status \"4f65fc253aac48fef0c50fd9531562775060d3443e2bff07250d206f4fc852c4\": rpc error: code = NotFound desc = could not find container \"4f65fc253aac48fef0c50fd9531562775060d3443e2bff07250d206f4fc852c4\": container with ID starting with 4f65fc253aac48fef0c50fd9531562775060d3443e2bff07250d206f4fc852c4 not found: ID does not exist" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.792212 5049 scope.go:117] "RemoveContainer" containerID="dc23f29d2cd29f27eab8357022fa5fe1dba3ce9d23401123748245e018f548ce" Jan 27 19:20:30 crc kubenswrapper[5049]: E0127 19:20:30.792508 5049 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"dc23f29d2cd29f27eab8357022fa5fe1dba3ce9d23401123748245e018f548ce\": container with ID starting with dc23f29d2cd29f27eab8357022fa5fe1dba3ce9d23401123748245e018f548ce not found: ID does not exist" containerID="dc23f29d2cd29f27eab8357022fa5fe1dba3ce9d23401123748245e018f548ce" Jan 27 19:20:30 crc kubenswrapper[5049]: I0127 19:20:30.792531 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc23f29d2cd29f27eab8357022fa5fe1dba3ce9d23401123748245e018f548ce"} err="failed to get container status \"dc23f29d2cd29f27eab8357022fa5fe1dba3ce9d23401123748245e018f548ce\": rpc error: code = NotFound desc = could not find container \"dc23f29d2cd29f27eab8357022fa5fe1dba3ce9d23401123748245e018f548ce\": container with ID starting with dc23f29d2cd29f27eab8357022fa5fe1dba3ce9d23401123748245e018f548ce not found: ID does not exist" Jan 27 19:20:31 crc kubenswrapper[5049]: I0127 19:20:31.659216 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79e6831-a376-487e-8d22-5f5852a5037e" path="/var/lib/kubelet/pods/a79e6831-a376-487e-8d22-5f5852a5037e/volumes" Jan 27 19:20:47 crc kubenswrapper[5049]: I0127 19:20:47.781518 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:20:47 crc kubenswrapper[5049]: I0127 19:20:47.782047 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:21:17 crc kubenswrapper[5049]: I0127 19:21:17.782262 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:21:17 crc kubenswrapper[5049]: I0127 19:21:17.782942 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:21:47 crc kubenswrapper[5049]: I0127 19:21:47.781844 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:21:47 crc kubenswrapper[5049]: I0127 19:21:47.782582 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:21:47 crc kubenswrapper[5049]: I0127 19:21:47.782648 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 19:21:47 crc kubenswrapper[5049]: I0127 19:21:47.783640 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe28027e9d5bac15184fb5925884f4ee5b93dff477b09128301855d8ebed5e23"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:21:47 crc kubenswrapper[5049]: I0127 19:21:47.783731 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://fe28027e9d5bac15184fb5925884f4ee5b93dff477b09128301855d8ebed5e23" gracePeriod=600 Jan 27 19:21:48 crc kubenswrapper[5049]: I0127 19:21:48.382480 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="fe28027e9d5bac15184fb5925884f4ee5b93dff477b09128301855d8ebed5e23" exitCode=0 Jan 27 19:21:48 crc kubenswrapper[5049]: I0127 19:21:48.382564 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"fe28027e9d5bac15184fb5925884f4ee5b93dff477b09128301855d8ebed5e23"} Jan 27 19:21:48 crc kubenswrapper[5049]: I0127 19:21:48.382943 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d"} Jan 27 19:21:48 crc kubenswrapper[5049]: I0127 19:21:48.382969 5049 scope.go:117] "RemoveContainer" containerID="3865dc8a4c7383927c01cbaaa486feb9603e01eb13e1bea0c4f6615a379d38bf" Jan 27 19:24:17 crc kubenswrapper[5049]: I0127 19:24:17.781749 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:24:17 crc kubenswrapper[5049]: I0127 19:24:17.782845 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:24:47 crc kubenswrapper[5049]: I0127 19:24:47.781761 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:24:47 crc kubenswrapper[5049]: I0127 19:24:47.782273 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:25:17 crc 
kubenswrapper[5049]: I0127 19:25:17.781491 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:25:17 crc kubenswrapper[5049]: I0127 19:25:17.782101 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:25:17 crc kubenswrapper[5049]: I0127 19:25:17.782151 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 19:25:17 crc kubenswrapper[5049]: I0127 19:25:17.782886 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:25:17 crc kubenswrapper[5049]: I0127 19:25:17.782950 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" gracePeriod=600 Jan 27 19:25:17 crc kubenswrapper[5049]: E0127 19:25:17.978792 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:25:18 crc kubenswrapper[5049]: I0127 19:25:18.419778 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" exitCode=0 Jan 27 19:25:18 crc kubenswrapper[5049]: I0127 19:25:18.419818 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d"} Jan 27 19:25:18 crc kubenswrapper[5049]: I0127 19:25:18.419847 5049 scope.go:117] "RemoveContainer" containerID="fe28027e9d5bac15184fb5925884f4ee5b93dff477b09128301855d8ebed5e23" Jan 27 19:25:18 crc kubenswrapper[5049]: I0127 19:25:18.420606 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:25:18 crc kubenswrapper[5049]: E0127 19:25:18.420959 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:25:31 crc kubenswrapper[5049]: I0127 19:25:31.646600 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:25:31 crc kubenswrapper[5049]: E0127 19:25:31.647390 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:25:43 crc kubenswrapper[5049]: I0127 19:25:43.646511 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:25:43 crc kubenswrapper[5049]: E0127 19:25:43.647411 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:25:55 crc kubenswrapper[5049]: I0127 19:25:55.653710 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:25:55 crc kubenswrapper[5049]: E0127 19:25:55.654565 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:26:08 crc kubenswrapper[5049]: I0127 19:26:08.646838 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:26:08 crc kubenswrapper[5049]: E0127 19:26:08.647750 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:26:20 crc kubenswrapper[5049]: I0127 19:26:20.646261 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:26:20 crc kubenswrapper[5049]: E0127 19:26:20.647046 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" 
podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:26:33 crc kubenswrapper[5049]: I0127 19:26:33.645504 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:26:33 crc kubenswrapper[5049]: E0127 19:26:33.646396 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:26:48 crc kubenswrapper[5049]: I0127 19:26:48.646076 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:26:48 crc kubenswrapper[5049]: E0127 19:26:48.647087 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:26:59 crc kubenswrapper[5049]: I0127 19:26:59.647910 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:26:59 crc kubenswrapper[5049]: E0127 19:26:59.648650 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:27:11 crc kubenswrapper[5049]: I0127 19:27:11.646766 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:27:11 crc kubenswrapper[5049]: E0127 19:27:11.647717 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:27:23 crc kubenswrapper[5049]: I0127 19:27:23.647181 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:27:23 crc kubenswrapper[5049]: E0127 19:27:23.655593 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:27:36 crc kubenswrapper[5049]: I0127 19:27:36.645998 5049 scope.go:117] "RemoveContainer" 
containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:27:36 crc kubenswrapper[5049]: E0127 19:27:36.646965 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:27:51 crc kubenswrapper[5049]: I0127 19:27:51.646402 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:27:51 crc kubenswrapper[5049]: E0127 19:27:51.647158 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:28:03 crc kubenswrapper[5049]: I0127 19:28:03.646355 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:28:03 crc kubenswrapper[5049]: E0127 19:28:03.647453 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:28:16 crc kubenswrapper[5049]: I0127 19:28:16.646598 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:28:16 crc kubenswrapper[5049]: E0127 19:28:16.647364 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:28:30 crc kubenswrapper[5049]: I0127 19:28:30.646737 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:28:30 crc kubenswrapper[5049]: E0127 19:28:30.647804 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.783942 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tn9zz"] Jan 27 19:28:35 crc kubenswrapper[5049]: E0127 19:28:35.785147 5049 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a79e6831-a376-487e-8d22-5f5852a5037e" containerName="extract-utilities" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.785168 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79e6831-a376-487e-8d22-5f5852a5037e" containerName="extract-utilities" Jan 27 19:28:35 crc kubenswrapper[5049]: E0127 19:28:35.785206 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79e6831-a376-487e-8d22-5f5852a5037e" containerName="extract-content" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.785216 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79e6831-a376-487e-8d22-5f5852a5037e" containerName="extract-content" Jan 27 19:28:35 crc kubenswrapper[5049]: E0127 19:28:35.785248 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adca9989-ee7c-4344-a374-77f93eeab801" containerName="extract-utilities" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.785259 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="adca9989-ee7c-4344-a374-77f93eeab801" containerName="extract-utilities" Jan 27 19:28:35 crc kubenswrapper[5049]: E0127 19:28:35.785271 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adca9989-ee7c-4344-a374-77f93eeab801" containerName="registry-server" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.785278 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="adca9989-ee7c-4344-a374-77f93eeab801" containerName="registry-server" Jan 27 19:28:35 crc kubenswrapper[5049]: E0127 19:28:35.785320 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adca9989-ee7c-4344-a374-77f93eeab801" containerName="extract-content" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.785330 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="adca9989-ee7c-4344-a374-77f93eeab801" containerName="extract-content" Jan 27 19:28:35 crc kubenswrapper[5049]: E0127 19:28:35.785346 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79e6831-a376-487e-8d22-5f5852a5037e" containerName="registry-server" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.785355 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79e6831-a376-487e-8d22-5f5852a5037e" containerName="registry-server" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.785611 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="adca9989-ee7c-4344-a374-77f93eeab801" containerName="registry-server" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.785632 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a79e6831-a376-487e-8d22-5f5852a5037e" containerName="registry-server" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.787557 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.790646 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-utilities\") pod \"certified-operators-tn9zz\" (UID: \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\") " pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.791117 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-catalog-content\") pod \"certified-operators-tn9zz\" (UID: \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\") " pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.791216 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxz84\" (UniqueName: \"kubernetes.io/projected/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-kube-api-access-sxz84\") pod \"certified-operators-tn9zz\" (UID: \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\") " pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.797858 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tn9zz"] Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.893613 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-utilities\") pod \"certified-operators-tn9zz\" (UID: \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\") " pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.893771 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-catalog-content\") pod \"certified-operators-tn9zz\" (UID: \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\") " pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.893798 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxz84\" (UniqueName: \"kubernetes.io/projected/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-kube-api-access-sxz84\") pod \"certified-operators-tn9zz\" (UID: \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\") " pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.894111 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-utilities\") pod \"certified-operators-tn9zz\" (UID: \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\") " pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.894331 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-catalog-content\") pod \"certified-operators-tn9zz\" (UID: \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\") " pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:35 crc kubenswrapper[5049]: I0127 19:28:35.923171 5049 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sxz84\" (UniqueName: \"kubernetes.io/projected/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-kube-api-access-sxz84\") pod \"certified-operators-tn9zz\" (UID: \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\") " pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:36 crc kubenswrapper[5049]: I0127 19:28:36.110644 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:36 crc kubenswrapper[5049]: I0127 19:28:36.630113 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tn9zz"] Jan 27 19:28:37 crc kubenswrapper[5049]: I0127 19:28:37.203648 5049 generic.go:334] "Generic (PLEG): container finished" podID="d30bade7-9bc5-4cef-9ce3-b42e56beeab9" containerID="abe4d7f39b70d11e1f75b54da96493337c8d2bfa88720261a4566e302dbe4dda" exitCode=0 Jan 27 19:28:37 crc kubenswrapper[5049]: I0127 19:28:37.203708 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tn9zz" event={"ID":"d30bade7-9bc5-4cef-9ce3-b42e56beeab9","Type":"ContainerDied","Data":"abe4d7f39b70d11e1f75b54da96493337c8d2bfa88720261a4566e302dbe4dda"} Jan 27 19:28:37 crc kubenswrapper[5049]: I0127 19:28:37.203990 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tn9zz" event={"ID":"d30bade7-9bc5-4cef-9ce3-b42e56beeab9","Type":"ContainerStarted","Data":"a4c12fa7bc51d1df57eca4c4e06ef3dc7f1abe357edb3cba581a6736ddb63390"} Jan 27 19:28:37 crc kubenswrapper[5049]: I0127 19:28:37.206600 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:28:39 crc kubenswrapper[5049]: I0127 19:28:39.223206 5049 generic.go:334] "Generic (PLEG): container finished" podID="d30bade7-9bc5-4cef-9ce3-b42e56beeab9" containerID="129f65e044f3f9cc85032e5ff53ba8b720f1b6d37656919e294c5d5709ffd107" exitCode=0 Jan 27 19:28:39 crc kubenswrapper[5049]: I0127 19:28:39.223304 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tn9zz" event={"ID":"d30bade7-9bc5-4cef-9ce3-b42e56beeab9","Type":"ContainerDied","Data":"129f65e044f3f9cc85032e5ff53ba8b720f1b6d37656919e294c5d5709ffd107"} Jan 27 19:28:40 crc kubenswrapper[5049]: I0127 19:28:40.239387 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tn9zz" event={"ID":"d30bade7-9bc5-4cef-9ce3-b42e56beeab9","Type":"ContainerStarted","Data":"3a5dac6076862c890f32b2a181cd49251020a10da571a6e46e2e0430a8833605"} Jan 27 19:28:40 crc kubenswrapper[5049]: I0127 19:28:40.266641 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tn9zz" podStartSLOduration=2.516154604 podStartE2EDuration="5.266607626s" podCreationTimestamp="2026-01-27 19:28:35 +0000 UTC" firstStartedPulling="2026-01-27 19:28:37.206315715 +0000 UTC m=+9092.305289264" lastFinishedPulling="2026-01-27 19:28:39.956768727 +0000 UTC m=+9095.055742286" observedRunningTime="2026-01-27 19:28:40.257376732 +0000 UTC m=+9095.356350291" watchObservedRunningTime="2026-01-27 19:28:40.266607626 +0000 UTC m=+9095.365581165" Jan 27 19:28:42 crc kubenswrapper[5049]: I0127 19:28:42.646315 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:28:42 crc kubenswrapper[5049]: E0127 19:28:42.647438 5049 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:28:46 crc kubenswrapper[5049]: I0127 19:28:46.111363 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:46 crc kubenswrapper[5049]: I0127 19:28:46.112033 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:46 crc kubenswrapper[5049]: I0127 19:28:46.163159 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:46 crc kubenswrapper[5049]: I0127 19:28:46.359031 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:46 crc kubenswrapper[5049]: I0127 19:28:46.428488 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tn9zz"] Jan 27 19:28:48 crc kubenswrapper[5049]: I0127 19:28:48.325283 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tn9zz" podUID="d30bade7-9bc5-4cef-9ce3-b42e56beeab9" containerName="registry-server" containerID="cri-o://3a5dac6076862c890f32b2a181cd49251020a10da571a6e46e2e0430a8833605" gracePeriod=2 Jan 27 19:28:48 crc kubenswrapper[5049]: I0127 19:28:48.819812 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:48 crc kubenswrapper[5049]: I0127 19:28:48.959228 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-utilities\") pod \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\" (UID: \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\") " Jan 27 19:28:48 crc kubenswrapper[5049]: I0127 19:28:48.959306 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-catalog-content\") pod \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\" (UID: \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\") " Jan 27 19:28:48 crc kubenswrapper[5049]: I0127 19:28:48.959896 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxz84\" (UniqueName: \"kubernetes.io/projected/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-kube-api-access-sxz84\") pod \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\" (UID: \"d30bade7-9bc5-4cef-9ce3-b42e56beeab9\") " Jan 27 19:28:48 crc kubenswrapper[5049]: I0127 19:28:48.960183 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-utilities" (OuterVolumeSpecName: "utilities") pod "d30bade7-9bc5-4cef-9ce3-b42e56beeab9" (UID: "d30bade7-9bc5-4cef-9ce3-b42e56beeab9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:28:48 crc kubenswrapper[5049]: I0127 19:28:48.960723 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:28:48 crc kubenswrapper[5049]: I0127 19:28:48.967015 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-kube-api-access-sxz84" (OuterVolumeSpecName: "kube-api-access-sxz84") pod "d30bade7-9bc5-4cef-9ce3-b42e56beeab9" (UID: "d30bade7-9bc5-4cef-9ce3-b42e56beeab9"). InnerVolumeSpecName "kube-api-access-sxz84". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.021241 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d30bade7-9bc5-4cef-9ce3-b42e56beeab9" (UID: "d30bade7-9bc5-4cef-9ce3-b42e56beeab9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.062351 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.062398 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxz84\" (UniqueName: \"kubernetes.io/projected/d30bade7-9bc5-4cef-9ce3-b42e56beeab9-kube-api-access-sxz84\") on node \"crc\" DevicePath \"\"" Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.336292 5049 generic.go:334] "Generic (PLEG): container finished" podID="d30bade7-9bc5-4cef-9ce3-b42e56beeab9" containerID="3a5dac6076862c890f32b2a181cd49251020a10da571a6e46e2e0430a8833605" exitCode=0 Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.336356 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tn9zz" event={"ID":"d30bade7-9bc5-4cef-9ce3-b42e56beeab9","Type":"ContainerDied","Data":"3a5dac6076862c890f32b2a181cd49251020a10da571a6e46e2e0430a8833605"} Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.336370 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tn9zz" Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.336395 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tn9zz" event={"ID":"d30bade7-9bc5-4cef-9ce3-b42e56beeab9","Type":"ContainerDied","Data":"a4c12fa7bc51d1df57eca4c4e06ef3dc7f1abe357edb3cba581a6736ddb63390"} Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.336418 5049 scope.go:117] "RemoveContainer" containerID="3a5dac6076862c890f32b2a181cd49251020a10da571a6e46e2e0430a8833605" Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.366210 5049 scope.go:117] "RemoveContainer" containerID="129f65e044f3f9cc85032e5ff53ba8b720f1b6d37656919e294c5d5709ffd107" Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.389784 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tn9zz"] Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.400190 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tn9zz"] Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.410911 5049 scope.go:117] "RemoveContainer" containerID="abe4d7f39b70d11e1f75b54da96493337c8d2bfa88720261a4566e302dbe4dda" Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.449111 5049 scope.go:117] "RemoveContainer" containerID="3a5dac6076862c890f32b2a181cd49251020a10da571a6e46e2e0430a8833605" Jan 27 19:28:49 crc kubenswrapper[5049]: E0127 19:28:49.449576 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a5dac6076862c890f32b2a181cd49251020a10da571a6e46e2e0430a8833605\": container with ID starting with 3a5dac6076862c890f32b2a181cd49251020a10da571a6e46e2e0430a8833605 not found: ID does not exist" containerID="3a5dac6076862c890f32b2a181cd49251020a10da571a6e46e2e0430a8833605" Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.449608 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5dac6076862c890f32b2a181cd49251020a10da571a6e46e2e0430a8833605"} err="failed to get container status \"3a5dac6076862c890f32b2a181cd49251020a10da571a6e46e2e0430a8833605\": rpc error: code = NotFound desc = could not find container \"3a5dac6076862c890f32b2a181cd49251020a10da571a6e46e2e0430a8833605\": container with ID starting with 3a5dac6076862c890f32b2a181cd49251020a10da571a6e46e2e0430a8833605 not found: ID does not exist" Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.449630 5049 scope.go:117] "RemoveContainer" containerID="129f65e044f3f9cc85032e5ff53ba8b720f1b6d37656919e294c5d5709ffd107" Jan 27 19:28:49 crc kubenswrapper[5049]: E0127 19:28:49.450047 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"129f65e044f3f9cc85032e5ff53ba8b720f1b6d37656919e294c5d5709ffd107\": container with ID starting with 129f65e044f3f9cc85032e5ff53ba8b720f1b6d37656919e294c5d5709ffd107 not found: ID does not exist" containerID="129f65e044f3f9cc85032e5ff53ba8b720f1b6d37656919e294c5d5709ffd107" Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.450102 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"129f65e044f3f9cc85032e5ff53ba8b720f1b6d37656919e294c5d5709ffd107"} err="failed to get container status \"129f65e044f3f9cc85032e5ff53ba8b720f1b6d37656919e294c5d5709ffd107\": rpc error: code = NotFound desc = could not find 
container \"129f65e044f3f9cc85032e5ff53ba8b720f1b6d37656919e294c5d5709ffd107\": container with ID starting with 129f65e044f3f9cc85032e5ff53ba8b720f1b6d37656919e294c5d5709ffd107 not found: ID does not exist" Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.450136 5049 scope.go:117] "RemoveContainer" containerID="abe4d7f39b70d11e1f75b54da96493337c8d2bfa88720261a4566e302dbe4dda" Jan 27 19:28:49 crc kubenswrapper[5049]: E0127 19:28:49.450448 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abe4d7f39b70d11e1f75b54da96493337c8d2bfa88720261a4566e302dbe4dda\": container with ID starting with abe4d7f39b70d11e1f75b54da96493337c8d2bfa88720261a4566e302dbe4dda not found: ID does not exist" containerID="abe4d7f39b70d11e1f75b54da96493337c8d2bfa88720261a4566e302dbe4dda" Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.450477 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe4d7f39b70d11e1f75b54da96493337c8d2bfa88720261a4566e302dbe4dda"} err="failed to get container status \"abe4d7f39b70d11e1f75b54da96493337c8d2bfa88720261a4566e302dbe4dda\": rpc error: code = NotFound desc = could not find container \"abe4d7f39b70d11e1f75b54da96493337c8d2bfa88720261a4566e302dbe4dda\": container with ID starting with abe4d7f39b70d11e1f75b54da96493337c8d2bfa88720261a4566e302dbe4dda not found: ID does not exist" Jan 27 19:28:49 crc kubenswrapper[5049]: I0127 19:28:49.657387 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d30bade7-9bc5-4cef-9ce3-b42e56beeab9" path="/var/lib/kubelet/pods/d30bade7-9bc5-4cef-9ce3-b42e56beeab9/volumes" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.707447 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p5jlv"] Jan 27 19:28:54 crc kubenswrapper[5049]: E0127 19:28:54.709144 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30bade7-9bc5-4cef-9ce3-b42e56beeab9" containerName="extract-content" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.709179 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30bade7-9bc5-4cef-9ce3-b42e56beeab9" containerName="extract-content" Jan 27 19:28:54 crc kubenswrapper[5049]: E0127 19:28:54.709219 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30bade7-9bc5-4cef-9ce3-b42e56beeab9" containerName="registry-server" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.709238 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30bade7-9bc5-4cef-9ce3-b42e56beeab9" containerName="registry-server" Jan 27 19:28:54 crc kubenswrapper[5049]: E0127 19:28:54.709311 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30bade7-9bc5-4cef-9ce3-b42e56beeab9" containerName="extract-utilities" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.709330 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30bade7-9bc5-4cef-9ce3-b42e56beeab9" containerName="extract-utilities" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.709849 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="d30bade7-9bc5-4cef-9ce3-b42e56beeab9" containerName="registry-server" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.713939 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5jlv" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.716457 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5jlv"] Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.777251 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee5b875-4158-4b6a-aa7e-e2551e32191d-catalog-content\") pod \"redhat-marketplace-p5jlv\" (UID: \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\") " pod="openshift-marketplace/redhat-marketplace-p5jlv" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.777313 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skvd7\" (UniqueName: \"kubernetes.io/projected/7ee5b875-4158-4b6a-aa7e-e2551e32191d-kube-api-access-skvd7\") pod \"redhat-marketplace-p5jlv\" (UID: \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\") " pod="openshift-marketplace/redhat-marketplace-p5jlv" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.777398 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee5b875-4158-4b6a-aa7e-e2551e32191d-utilities\") pod \"redhat-marketplace-p5jlv\" (UID: \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\") " pod="openshift-marketplace/redhat-marketplace-p5jlv" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.879696 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee5b875-4158-4b6a-aa7e-e2551e32191d-catalog-content\") pod \"redhat-marketplace-p5jlv\" (UID: \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\") " pod="openshift-marketplace/redhat-marketplace-p5jlv" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.879799 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skvd7\" (UniqueName: \"kubernetes.io/projected/7ee5b875-4158-4b6a-aa7e-e2551e32191d-kube-api-access-skvd7\") pod \"redhat-marketplace-p5jlv\" (UID: \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\") " pod="openshift-marketplace/redhat-marketplace-p5jlv" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.879890 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee5b875-4158-4b6a-aa7e-e2551e32191d-utilities\") pod \"redhat-marketplace-p5jlv\" (UID: \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\") " pod="openshift-marketplace/redhat-marketplace-p5jlv" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.880292 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee5b875-4158-4b6a-aa7e-e2551e32191d-catalog-content\") pod \"redhat-marketplace-p5jlv\" (UID: \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\") " pod="openshift-marketplace/redhat-marketplace-p5jlv" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.880337 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee5b875-4158-4b6a-aa7e-e2551e32191d-utilities\") pod \"redhat-marketplace-p5jlv\" (UID: \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\") " pod="openshift-marketplace/redhat-marketplace-p5jlv" Jan 27 19:28:54 crc kubenswrapper[5049]: I0127 19:28:54.904847 5049 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-skvd7\" (UniqueName: \"kubernetes.io/projected/7ee5b875-4158-4b6a-aa7e-e2551e32191d-kube-api-access-skvd7\") pod \"redhat-marketplace-p5jlv\" (UID: \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\") " pod="openshift-marketplace/redhat-marketplace-p5jlv" Jan 27 19:28:55 crc kubenswrapper[5049]: I0127 19:28:55.050407 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5jlv" Jan 27 19:28:55 crc kubenswrapper[5049]: I0127 19:28:55.555558 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5jlv"] Jan 27 19:28:56 crc kubenswrapper[5049]: I0127 19:28:56.402927 5049 generic.go:334] "Generic (PLEG): container finished" podID="7ee5b875-4158-4b6a-aa7e-e2551e32191d" containerID="a3d94e7be0d002c7302d4c53ed996afba5f46db4ec9b7ca3f6104fd47ff0af16" exitCode=0 Jan 27 19:28:56 crc kubenswrapper[5049]: I0127 19:28:56.402985 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5jlv" event={"ID":"7ee5b875-4158-4b6a-aa7e-e2551e32191d","Type":"ContainerDied","Data":"a3d94e7be0d002c7302d4c53ed996afba5f46db4ec9b7ca3f6104fd47ff0af16"} Jan 27 19:28:56 crc kubenswrapper[5049]: I0127 19:28:56.403265 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5jlv" event={"ID":"7ee5b875-4158-4b6a-aa7e-e2551e32191d","Type":"ContainerStarted","Data":"ac67c87d8164ff1f688de6a69f9eec1fa2ba3fe4f92889ad5b678afbe8d69dc6"} Jan 27 19:28:56 crc kubenswrapper[5049]: I0127 19:28:56.647297 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:28:56 crc kubenswrapper[5049]: E0127 19:28:56.647799 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:28:58 crc kubenswrapper[5049]: I0127 19:28:58.421353 5049 generic.go:334] "Generic (PLEG): container finished" podID="7ee5b875-4158-4b6a-aa7e-e2551e32191d" containerID="6b78fe2987883adaabcce696d65e411f229e74cafad8091d819f29ac1c495932" exitCode=0 Jan 27 19:28:58 crc kubenswrapper[5049]: I0127 19:28:58.421459 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5jlv" event={"ID":"7ee5b875-4158-4b6a-aa7e-e2551e32191d","Type":"ContainerDied","Data":"6b78fe2987883adaabcce696d65e411f229e74cafad8091d819f29ac1c495932"} Jan 27 19:28:58 crc kubenswrapper[5049]: E0127 19:28:58.528255 5049 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ee5b875_4158_4b6a_aa7e_e2551e32191d.slice/crio-conmon-6b78fe2987883adaabcce696d65e411f229e74cafad8091d819f29ac1c495932.scope\": RecentStats: unable to find data in memory cache]" Jan 27 19:28:59 crc kubenswrapper[5049]: I0127 19:28:59.443464 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5jlv" event={"ID":"7ee5b875-4158-4b6a-aa7e-e2551e32191d","Type":"ContainerStarted","Data":"396c93872b778e186287ab0215bb9a6e7b2a2d225515c44a07a7ad26e2bf7d21"} Jan 27 19:28:59 
crc kubenswrapper[5049]: I0127 19:28:59.466529 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p5jlv" podStartSLOduration=3.051982866 podStartE2EDuration="5.466508354s" podCreationTimestamp="2026-01-27 19:28:54 +0000 UTC" firstStartedPulling="2026-01-27 19:28:56.405425361 +0000 UTC m=+9111.504398920" lastFinishedPulling="2026-01-27 19:28:58.819950859 +0000 UTC m=+9113.918924408" observedRunningTime="2026-01-27 19:28:59.461271444 +0000 UTC m=+9114.560245003" watchObservedRunningTime="2026-01-27 19:28:59.466508354 +0000 UTC m=+9114.565481903"
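The figures in the startup-latency entry above are internally consistent: 5.466508354s is watchObservedRunningTime minus podCreationTimestamp, and subtracting the 2.414525498s spent pulling the image (lastFinishedPulling minus firstStartedPulling) gives the 3.051982866 SLO duration. A short sketch that reproduces both numbers from the logged timestamps; the field interpretation is ours, not quoted from kubelet source:

```go
// Reproduces the two figures in the pod_startup_latency_tracker entry above,
// assuming: E2E = watchObservedRunningTime - podCreationTimestamp, and the
// SLO duration excludes image-pull time (lastFinishedPulling -
// firstStartedPulling). Both results match the logged values.
package main

import (
	"fmt"
	"time"
)

// mustParse reads timestamps exactly as they appear in the log entry.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-27 19:28:54 +0000 UTC")
	firstPull := mustParse("2026-01-27 19:28:56.405425361 +0000 UTC")
	lastPull := mustParse("2026-01-27 19:28:58.819950859 +0000 UTC")
	running := mustParse("2026-01-27 19:28:59.466508354 +0000 UTC")

	e2e := running.Sub(created)          // 5.466508354s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 3.051982866s, the logged podStartSLOduration
	fmt.Println(e2e, slo)
}
```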
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5jlv" Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.020853 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee5b875-4158-4b6a-aa7e-e2551e32191d-utilities\") pod \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\" (UID: \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\") " Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.020940 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee5b875-4158-4b6a-aa7e-e2551e32191d-catalog-content\") pod \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\" (UID: \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\") " Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.021184 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skvd7\" (UniqueName: \"kubernetes.io/projected/7ee5b875-4158-4b6a-aa7e-e2551e32191d-kube-api-access-skvd7\") pod \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\" (UID: \"7ee5b875-4158-4b6a-aa7e-e2551e32191d\") " Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.021654 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ee5b875-4158-4b6a-aa7e-e2551e32191d-utilities" (OuterVolumeSpecName: "utilities") pod "7ee5b875-4158-4b6a-aa7e-e2551e32191d" (UID: "7ee5b875-4158-4b6a-aa7e-e2551e32191d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.028470 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee5b875-4158-4b6a-aa7e-e2551e32191d-kube-api-access-skvd7" (OuterVolumeSpecName: "kube-api-access-skvd7") pod "7ee5b875-4158-4b6a-aa7e-e2551e32191d" (UID: "7ee5b875-4158-4b6a-aa7e-e2551e32191d"). InnerVolumeSpecName "kube-api-access-skvd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.046799 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ee5b875-4158-4b6a-aa7e-e2551e32191d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ee5b875-4158-4b6a-aa7e-e2551e32191d" (UID: "7ee5b875-4158-4b6a-aa7e-e2551e32191d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.123618 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skvd7\" (UniqueName: \"kubernetes.io/projected/7ee5b875-4158-4b6a-aa7e-e2551e32191d-kube-api-access-skvd7\") on node \"crc\" DevicePath \"\"" Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.123995 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ee5b875-4158-4b6a-aa7e-e2551e32191d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.124088 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ee5b875-4158-4b6a-aa7e-e2551e32191d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.565901 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5jlv" event={"ID":"7ee5b875-4158-4b6a-aa7e-e2551e32191d","Type":"ContainerDied","Data":"ac67c87d8164ff1f688de6a69f9eec1fa2ba3fe4f92889ad5b678afbe8d69dc6"} Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.565978 5049 scope.go:117] "RemoveContainer" containerID="396c93872b778e186287ab0215bb9a6e7b2a2d225515c44a07a7ad26e2bf7d21" Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.565991 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5jlv" Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.603977 5049 scope.go:117] "RemoveContainer" containerID="6b78fe2987883adaabcce696d65e411f229e74cafad8091d819f29ac1c495932" Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.609257 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5jlv"] Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.618279 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5jlv"] Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.636155 5049 scope.go:117] "RemoveContainer" containerID="a3d94e7be0d002c7302d4c53ed996afba5f46db4ec9b7ca3f6104fd47ff0af16" Jan 27 19:29:09 crc kubenswrapper[5049]: I0127 19:29:09.658547 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ee5b875-4158-4b6a-aa7e-e2551e32191d" path="/var/lib/kubelet/pods/7ee5b875-4158-4b6a-aa7e-e2551e32191d/volumes" Jan 27 19:29:11 crc kubenswrapper[5049]: I0127 19:29:11.647478 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:29:11 crc kubenswrapper[5049]: E0127 19:29:11.648286 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:29:22 crc kubenswrapper[5049]: I0127 19:29:22.646258 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:29:22 crc kubenswrapper[5049]: E0127 19:29:22.646979 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:29:35 crc kubenswrapper[5049]: I0127 19:29:35.658706 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:29:35 crc kubenswrapper[5049]: E0127 19:29:35.659505 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:29:46 crc kubenswrapper[5049]: I0127 19:29:46.646183 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:29:46 crc kubenswrapper[5049]: E0127 19:29:46.647175 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:29:57 crc kubenswrapper[5049]: I0127 19:29:57.647100 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:29:57 crc kubenswrapper[5049]: E0127 19:29:57.647868 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.144753 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s"] Jan 27 19:30:00 crc kubenswrapper[5049]: E0127 19:30:00.145700 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee5b875-4158-4b6a-aa7e-e2551e32191d" containerName="extract-content" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.145720 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee5b875-4158-4b6a-aa7e-e2551e32191d" containerName="extract-content" Jan 27 19:30:00 crc kubenswrapper[5049]: E0127 19:30:00.145740 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee5b875-4158-4b6a-aa7e-e2551e32191d" containerName="registry-server" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.145747 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee5b875-4158-4b6a-aa7e-e2551e32191d" containerName="registry-server" Jan 27 19:30:00 crc kubenswrapper[5049]: E0127 19:30:00.145758 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee5b875-4158-4b6a-aa7e-e2551e32191d" containerName="extract-utilities" Jan 27 19:30:00 crc kubenswrapper[5049]: 
I0127 19:30:00.145767 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee5b875-4158-4b6a-aa7e-e2551e32191d" containerName="extract-utilities" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.146026 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee5b875-4158-4b6a-aa7e-e2551e32191d" containerName="registry-server" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.146919 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.151065 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.151115 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.154878 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s"] Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.222797 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96pn7\" (UniqueName: \"kubernetes.io/projected/f3080871-429b-4dfd-9061-55453a88bb11-kube-api-access-96pn7\") pod \"collect-profiles-29492370-whp8s\" (UID: \"f3080871-429b-4dfd-9061-55453a88bb11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.223119 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3080871-429b-4dfd-9061-55453a88bb11-config-volume\") pod \"collect-profiles-29492370-whp8s\" (UID: \"f3080871-429b-4dfd-9061-55453a88bb11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.223254 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3080871-429b-4dfd-9061-55453a88bb11-secret-volume\") pod \"collect-profiles-29492370-whp8s\" (UID: \"f3080871-429b-4dfd-9061-55453a88bb11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.325347 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3080871-429b-4dfd-9061-55453a88bb11-config-volume\") pod \"collect-profiles-29492370-whp8s\" (UID: \"f3080871-429b-4dfd-9061-55453a88bb11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.325443 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3080871-429b-4dfd-9061-55453a88bb11-secret-volume\") pod \"collect-profiles-29492370-whp8s\" (UID: \"f3080871-429b-4dfd-9061-55453a88bb11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.325695 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96pn7\" (UniqueName: 
\"kubernetes.io/projected/f3080871-429b-4dfd-9061-55453a88bb11-kube-api-access-96pn7\") pod \"collect-profiles-29492370-whp8s\" (UID: \"f3080871-429b-4dfd-9061-55453a88bb11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.326408 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3080871-429b-4dfd-9061-55453a88bb11-config-volume\") pod \"collect-profiles-29492370-whp8s\" (UID: \"f3080871-429b-4dfd-9061-55453a88bb11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.335820 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3080871-429b-4dfd-9061-55453a88bb11-secret-volume\") pod \"collect-profiles-29492370-whp8s\" (UID: \"f3080871-429b-4dfd-9061-55453a88bb11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.351345 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96pn7\" (UniqueName: \"kubernetes.io/projected/f3080871-429b-4dfd-9061-55453a88bb11-kube-api-access-96pn7\") pod \"collect-profiles-29492370-whp8s\" (UID: \"f3080871-429b-4dfd-9061-55453a88bb11\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.479776 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" Jan 27 19:30:00 crc kubenswrapper[5049]: I0127 19:30:00.953114 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s"] Jan 27 19:30:01 crc kubenswrapper[5049]: I0127 19:30:01.042900 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" event={"ID":"f3080871-429b-4dfd-9061-55453a88bb11","Type":"ContainerStarted","Data":"74b811ec0b51802fe3676407c5071c6e510bffdd91d9a8269364f1479c831d7f"} Jan 27 19:30:02 crc kubenswrapper[5049]: I0127 19:30:02.052877 5049 generic.go:334] "Generic (PLEG): container finished" podID="f3080871-429b-4dfd-9061-55453a88bb11" containerID="32cc7d5eaae7952f4d4b5f36d6358fa7ecef26e29b1bc8515ac414158a415cc4" exitCode=0 Jan 27 19:30:02 crc kubenswrapper[5049]: I0127 19:30:02.052996 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" event={"ID":"f3080871-429b-4dfd-9061-55453a88bb11","Type":"ContainerDied","Data":"32cc7d5eaae7952f4d4b5f36d6358fa7ecef26e29b1bc8515ac414158a415cc4"} Jan 27 19:30:03 crc kubenswrapper[5049]: I0127 19:30:03.388572 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" Jan 27 19:30:03 crc kubenswrapper[5049]: I0127 19:30:03.489281 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3080871-429b-4dfd-9061-55453a88bb11-config-volume\") pod \"f3080871-429b-4dfd-9061-55453a88bb11\" (UID: \"f3080871-429b-4dfd-9061-55453a88bb11\") " Jan 27 19:30:03 crc kubenswrapper[5049]: I0127 19:30:03.489572 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3080871-429b-4dfd-9061-55453a88bb11-secret-volume\") pod \"f3080871-429b-4dfd-9061-55453a88bb11\" (UID: \"f3080871-429b-4dfd-9061-55453a88bb11\") " Jan 27 19:30:03 crc kubenswrapper[5049]: I0127 19:30:03.489812 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96pn7\" (UniqueName: \"kubernetes.io/projected/f3080871-429b-4dfd-9061-55453a88bb11-kube-api-access-96pn7\") pod \"f3080871-429b-4dfd-9061-55453a88bb11\" (UID: \"f3080871-429b-4dfd-9061-55453a88bb11\") " Jan 27 19:30:03 crc kubenswrapper[5049]: I0127 19:30:03.490143 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3080871-429b-4dfd-9061-55453a88bb11-config-volume" (OuterVolumeSpecName: "config-volume") pod "f3080871-429b-4dfd-9061-55453a88bb11" (UID: "f3080871-429b-4dfd-9061-55453a88bb11"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:30:03 crc kubenswrapper[5049]: I0127 19:30:03.490545 5049 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3080871-429b-4dfd-9061-55453a88bb11-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:03 crc kubenswrapper[5049]: I0127 19:30:03.960401 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3080871-429b-4dfd-9061-55453a88bb11-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f3080871-429b-4dfd-9061-55453a88bb11" (UID: "f3080871-429b-4dfd-9061-55453a88bb11"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:30:03 crc kubenswrapper[5049]: I0127 19:30:03.960879 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3080871-429b-4dfd-9061-55453a88bb11-kube-api-access-96pn7" (OuterVolumeSpecName: "kube-api-access-96pn7") pod "f3080871-429b-4dfd-9061-55453a88bb11" (UID: "f3080871-429b-4dfd-9061-55453a88bb11"). InnerVolumeSpecName "kube-api-access-96pn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:30:04 crc kubenswrapper[5049]: I0127 19:30:04.001471 5049 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f3080871-429b-4dfd-9061-55453a88bb11-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:04 crc kubenswrapper[5049]: I0127 19:30:04.001523 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96pn7\" (UniqueName: \"kubernetes.io/projected/f3080871-429b-4dfd-9061-55453a88bb11-kube-api-access-96pn7\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:04 crc kubenswrapper[5049]: I0127 19:30:04.073091 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" event={"ID":"f3080871-429b-4dfd-9061-55453a88bb11","Type":"ContainerDied","Data":"74b811ec0b51802fe3676407c5071c6e510bffdd91d9a8269364f1479c831d7f"} Jan 27 19:30:04 crc kubenswrapper[5049]: I0127 19:30:04.073135 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74b811ec0b51802fe3676407c5071c6e510bffdd91d9a8269364f1479c831d7f" Jan 27 19:30:04 crc kubenswrapper[5049]: I0127 19:30:04.073195 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492370-whp8s" Jan 27 19:30:04 crc kubenswrapper[5049]: I0127 19:30:04.458944 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx"] Jan 27 19:30:04 crc kubenswrapper[5049]: I0127 19:30:04.466875 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492325-xrxdx"] Jan 27 19:30:05 crc kubenswrapper[5049]: I0127 19:30:05.658939 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6070e364-ce14-4426-9a5d-29e7fe1c4d5d" path="/var/lib/kubelet/pods/6070e364-ce14-4426-9a5d-29e7fe1c4d5d/volumes" Jan 27 19:30:12 crc kubenswrapper[5049]: I0127 19:30:12.646943 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:30:12 crc kubenswrapper[5049]: E0127 19:30:12.647668 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:30:21 crc kubenswrapper[5049]: I0127 19:30:21.003285 5049 scope.go:117] "RemoveContainer" containerID="7e6dc244639ec28c3a3e385a107ca03985d591dda3d4272317b0a32ae0d375ca" Jan 27 19:30:26 crc kubenswrapper[5049]: I0127 19:30:26.647711 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:30:27 crc kubenswrapper[5049]: I0127 19:30:27.305648 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"fba46313af7925fc1e3bbc404383696fb33297e02b77b2e53ba038380beb78cb"} Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.393517 5049 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-4tfsc"] Jan 27 19:30:40 crc kubenswrapper[5049]: E0127 19:30:40.394550 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3080871-429b-4dfd-9061-55453a88bb11" containerName="collect-profiles" Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.394566 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3080871-429b-4dfd-9061-55453a88bb11" containerName="collect-profiles" Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.394831 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3080871-429b-4dfd-9061-55453a88bb11" containerName="collect-profiles" Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.396395 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.432367 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4tfsc"] Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.528640 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b4ac21-f124-42af-863d-f96317eed174-catalog-content\") pod \"community-operators-4tfsc\" (UID: \"45b4ac21-f124-42af-863d-f96317eed174\") " pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.528845 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46pjk\" (UniqueName: \"kubernetes.io/projected/45b4ac21-f124-42af-863d-f96317eed174-kube-api-access-46pjk\") pod \"community-operators-4tfsc\" (UID: \"45b4ac21-f124-42af-863d-f96317eed174\") " pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.529046 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45b4ac21-f124-42af-863d-f96317eed174-utilities\") pod \"community-operators-4tfsc\" (UID: \"45b4ac21-f124-42af-863d-f96317eed174\") " pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.630626 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45b4ac21-f124-42af-863d-f96317eed174-utilities\") pod \"community-operators-4tfsc\" (UID: \"45b4ac21-f124-42af-863d-f96317eed174\") " pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.630880 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b4ac21-f124-42af-863d-f96317eed174-catalog-content\") pod \"community-operators-4tfsc\" (UID: \"45b4ac21-f124-42af-863d-f96317eed174\") " pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.631010 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46pjk\" (UniqueName: \"kubernetes.io/projected/45b4ac21-f124-42af-863d-f96317eed174-kube-api-access-46pjk\") pod \"community-operators-4tfsc\" (UID: \"45b4ac21-f124-42af-863d-f96317eed174\") " pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.631596 5049 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45b4ac21-f124-42af-863d-f96317eed174-utilities\") pod \"community-operators-4tfsc\" (UID: \"45b4ac21-f124-42af-863d-f96317eed174\") " pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.631641 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b4ac21-f124-42af-863d-f96317eed174-catalog-content\") pod \"community-operators-4tfsc\" (UID: \"45b4ac21-f124-42af-863d-f96317eed174\") " pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.660115 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46pjk\" (UniqueName: \"kubernetes.io/projected/45b4ac21-f124-42af-863d-f96317eed174-kube-api-access-46pjk\") pod \"community-operators-4tfsc\" (UID: \"45b4ac21-f124-42af-863d-f96317eed174\") " pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:40 crc kubenswrapper[5049]: I0127 19:30:40.741388 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:41 crc kubenswrapper[5049]: I0127 19:30:41.339280 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4tfsc"] Jan 27 19:30:41 crc kubenswrapper[5049]: W0127 19:30:41.664496 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45b4ac21_f124_42af_863d_f96317eed174.slice/crio-0c2aadd96889f0cb140729c5ab4aa009135c483b63fcf202fcf34ad012d4dfeb WatchSource:0}: Error finding container 0c2aadd96889f0cb140729c5ab4aa009135c483b63fcf202fcf34ad012d4dfeb: Status 404 returned error can't find the container with id 0c2aadd96889f0cb140729c5ab4aa009135c483b63fcf202fcf34ad012d4dfeb Jan 27 19:30:42 crc kubenswrapper[5049]: I0127 19:30:42.466444 5049 generic.go:334] "Generic (PLEG): container finished" podID="45b4ac21-f124-42af-863d-f96317eed174" containerID="f1e69175befe1f9d82b7cfbd06d87cf78646f013b0db18a19564e4b7dd935e35" exitCode=0 Jan 27 19:30:42 crc kubenswrapper[5049]: I0127 19:30:42.466513 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4tfsc" event={"ID":"45b4ac21-f124-42af-863d-f96317eed174","Type":"ContainerDied","Data":"f1e69175befe1f9d82b7cfbd06d87cf78646f013b0db18a19564e4b7dd935e35"} Jan 27 19:30:42 crc kubenswrapper[5049]: I0127 19:30:42.467173 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4tfsc" event={"ID":"45b4ac21-f124-42af-863d-f96317eed174","Type":"ContainerStarted","Data":"0c2aadd96889f0cb140729c5ab4aa009135c483b63fcf202fcf34ad012d4dfeb"} Jan 27 19:30:44 crc kubenswrapper[5049]: I0127 19:30:44.483383 5049 generic.go:334] "Generic (PLEG): container finished" podID="45b4ac21-f124-42af-863d-f96317eed174" containerID="522e413a0e6b371bc1e6fe9a8c583897c8cd72c25af8e6f7846eeb38a47b7548" exitCode=0 Jan 27 19:30:44 crc kubenswrapper[5049]: I0127 19:30:44.483619 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4tfsc" event={"ID":"45b4ac21-f124-42af-863d-f96317eed174","Type":"ContainerDied","Data":"522e413a0e6b371bc1e6fe9a8c583897c8cd72c25af8e6f7846eeb38a47b7548"} Jan 27 19:30:45 crc kubenswrapper[5049]: I0127 19:30:45.507058 5049 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4tfsc" event={"ID":"45b4ac21-f124-42af-863d-f96317eed174","Type":"ContainerStarted","Data":"9c8a640181f21585da1279e697bd4cd21e9cc7023c497177858b56ea4a287cb3"} Jan 27 19:30:45 crc kubenswrapper[5049]: I0127 19:30:45.529582 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4tfsc" podStartSLOduration=2.993687747 podStartE2EDuration="5.529560632s" podCreationTimestamp="2026-01-27 19:30:40 +0000 UTC" firstStartedPulling="2026-01-27 19:30:42.468553789 +0000 UTC m=+9217.567527348" lastFinishedPulling="2026-01-27 19:30:45.004426684 +0000 UTC m=+9220.103400233" observedRunningTime="2026-01-27 19:30:45.525786645 +0000 UTC m=+9220.624760204" watchObservedRunningTime="2026-01-27 19:30:45.529560632 +0000 UTC m=+9220.628534191" Jan 27 19:30:50 crc kubenswrapper[5049]: I0127 19:30:50.741873 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:50 crc kubenswrapper[5049]: I0127 19:30:50.743319 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:51 crc kubenswrapper[5049]: I0127 19:30:51.321889 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:51 crc kubenswrapper[5049]: I0127 19:30:51.620404 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:51 crc kubenswrapper[5049]: I0127 19:30:51.684292 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4tfsc"] Jan 27 19:30:53 crc kubenswrapper[5049]: I0127 19:30:53.579896 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4tfsc" podUID="45b4ac21-f124-42af-863d-f96317eed174" containerName="registry-server" containerID="cri-o://9c8a640181f21585da1279e697bd4cd21e9cc7023c497177858b56ea4a287cb3" gracePeriod=2 Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.040432 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.153616 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b4ac21-f124-42af-863d-f96317eed174-catalog-content\") pod \"45b4ac21-f124-42af-863d-f96317eed174\" (UID: \"45b4ac21-f124-42af-863d-f96317eed174\") " Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.153836 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45b4ac21-f124-42af-863d-f96317eed174-utilities\") pod \"45b4ac21-f124-42af-863d-f96317eed174\" (UID: \"45b4ac21-f124-42af-863d-f96317eed174\") " Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.154052 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46pjk\" (UniqueName: \"kubernetes.io/projected/45b4ac21-f124-42af-863d-f96317eed174-kube-api-access-46pjk\") pod \"45b4ac21-f124-42af-863d-f96317eed174\" (UID: \"45b4ac21-f124-42af-863d-f96317eed174\") " Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.155233 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b4ac21-f124-42af-863d-f96317eed174-utilities" (OuterVolumeSpecName: "utilities") pod "45b4ac21-f124-42af-863d-f96317eed174" (UID: "45b4ac21-f124-42af-863d-f96317eed174"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.155483 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45b4ac21-f124-42af-863d-f96317eed174-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.170043 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b4ac21-f124-42af-863d-f96317eed174-kube-api-access-46pjk" (OuterVolumeSpecName: "kube-api-access-46pjk") pod "45b4ac21-f124-42af-863d-f96317eed174" (UID: "45b4ac21-f124-42af-863d-f96317eed174"). InnerVolumeSpecName "kube-api-access-46pjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.211447 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b4ac21-f124-42af-863d-f96317eed174-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45b4ac21-f124-42af-863d-f96317eed174" (UID: "45b4ac21-f124-42af-863d-f96317eed174"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.258976 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46pjk\" (UniqueName: \"kubernetes.io/projected/45b4ac21-f124-42af-863d-f96317eed174-kube-api-access-46pjk\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.259053 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45b4ac21-f124-42af-863d-f96317eed174-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.590925 5049 generic.go:334] "Generic (PLEG): container finished" podID="45b4ac21-f124-42af-863d-f96317eed174" containerID="9c8a640181f21585da1279e697bd4cd21e9cc7023c497177858b56ea4a287cb3" exitCode=0 Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.590988 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4tfsc" event={"ID":"45b4ac21-f124-42af-863d-f96317eed174","Type":"ContainerDied","Data":"9c8a640181f21585da1279e697bd4cd21e9cc7023c497177858b56ea4a287cb3"} Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.591467 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4tfsc" event={"ID":"45b4ac21-f124-42af-863d-f96317eed174","Type":"ContainerDied","Data":"0c2aadd96889f0cb140729c5ab4aa009135c483b63fcf202fcf34ad012d4dfeb"} Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.591496 5049 scope.go:117] "RemoveContainer" containerID="9c8a640181f21585da1279e697bd4cd21e9cc7023c497177858b56ea4a287cb3" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.591071 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4tfsc" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.615505 5049 scope.go:117] "RemoveContainer" containerID="522e413a0e6b371bc1e6fe9a8c583897c8cd72c25af8e6f7846eeb38a47b7548" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.638297 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4tfsc"] Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.645232 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4tfsc"] Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.652128 5049 scope.go:117] "RemoveContainer" containerID="f1e69175befe1f9d82b7cfbd06d87cf78646f013b0db18a19564e4b7dd935e35" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.716424 5049 scope.go:117] "RemoveContainer" containerID="9c8a640181f21585da1279e697bd4cd21e9cc7023c497177858b56ea4a287cb3" Jan 27 19:30:54 crc kubenswrapper[5049]: E0127 19:30:54.717091 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c8a640181f21585da1279e697bd4cd21e9cc7023c497177858b56ea4a287cb3\": container with ID starting with 9c8a640181f21585da1279e697bd4cd21e9cc7023c497177858b56ea4a287cb3 not found: ID does not exist" containerID="9c8a640181f21585da1279e697bd4cd21e9cc7023c497177858b56ea4a287cb3" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.717132 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c8a640181f21585da1279e697bd4cd21e9cc7023c497177858b56ea4a287cb3"} err="failed to get container status \"9c8a640181f21585da1279e697bd4cd21e9cc7023c497177858b56ea4a287cb3\": rpc error: code = NotFound desc = could not find container \"9c8a640181f21585da1279e697bd4cd21e9cc7023c497177858b56ea4a287cb3\": container with ID starting with 9c8a640181f21585da1279e697bd4cd21e9cc7023c497177858b56ea4a287cb3 not found: ID does not exist" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.717166 5049 scope.go:117] "RemoveContainer" containerID="522e413a0e6b371bc1e6fe9a8c583897c8cd72c25af8e6f7846eeb38a47b7548" Jan 27 19:30:54 crc kubenswrapper[5049]: E0127 19:30:54.717743 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"522e413a0e6b371bc1e6fe9a8c583897c8cd72c25af8e6f7846eeb38a47b7548\": container with ID starting with 522e413a0e6b371bc1e6fe9a8c583897c8cd72c25af8e6f7846eeb38a47b7548 not found: ID does not exist" containerID="522e413a0e6b371bc1e6fe9a8c583897c8cd72c25af8e6f7846eeb38a47b7548" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.717776 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522e413a0e6b371bc1e6fe9a8c583897c8cd72c25af8e6f7846eeb38a47b7548"} err="failed to get container status \"522e413a0e6b371bc1e6fe9a8c583897c8cd72c25af8e6f7846eeb38a47b7548\": rpc error: code = NotFound desc = could not find container \"522e413a0e6b371bc1e6fe9a8c583897c8cd72c25af8e6f7846eeb38a47b7548\": container with ID starting with 522e413a0e6b371bc1e6fe9a8c583897c8cd72c25af8e6f7846eeb38a47b7548 not found: ID does not exist" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.717799 5049 scope.go:117] "RemoveContainer" containerID="f1e69175befe1f9d82b7cfbd06d87cf78646f013b0db18a19564e4b7dd935e35" Jan 27 19:30:54 crc kubenswrapper[5049]: E0127 19:30:54.718259 5049 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f1e69175befe1f9d82b7cfbd06d87cf78646f013b0db18a19564e4b7dd935e35\": container with ID starting with f1e69175befe1f9d82b7cfbd06d87cf78646f013b0db18a19564e4b7dd935e35 not found: ID does not exist" containerID="f1e69175befe1f9d82b7cfbd06d87cf78646f013b0db18a19564e4b7dd935e35" Jan 27 19:30:54 crc kubenswrapper[5049]: I0127 19:30:54.718371 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e69175befe1f9d82b7cfbd06d87cf78646f013b0db18a19564e4b7dd935e35"} err="failed to get container status \"f1e69175befe1f9d82b7cfbd06d87cf78646f013b0db18a19564e4b7dd935e35\": rpc error: code = NotFound desc = could not find container \"f1e69175befe1f9d82b7cfbd06d87cf78646f013b0db18a19564e4b7dd935e35\": container with ID starting with f1e69175befe1f9d82b7cfbd06d87cf78646f013b0db18a19564e4b7dd935e35 not found: ID does not exist" Jan 27 19:30:55 crc kubenswrapper[5049]: I0127 19:30:55.660862 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b4ac21-f124-42af-863d-f96317eed174" path="/var/lib/kubelet/pods/45b4ac21-f124-42af-863d-f96317eed174/volumes" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.019905 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zb7z9"] Jan 27 19:32:25 crc kubenswrapper[5049]: E0127 19:32:25.020826 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b4ac21-f124-42af-863d-f96317eed174" containerName="extract-content" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.020839 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b4ac21-f124-42af-863d-f96317eed174" containerName="extract-content" Jan 27 19:32:25 crc kubenswrapper[5049]: E0127 19:32:25.020867 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b4ac21-f124-42af-863d-f96317eed174" containerName="extract-utilities" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.020872 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b4ac21-f124-42af-863d-f96317eed174" containerName="extract-utilities" Jan 27 19:32:25 crc kubenswrapper[5049]: E0127 19:32:25.020881 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b4ac21-f124-42af-863d-f96317eed174" containerName="registry-server" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.020888 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b4ac21-f124-42af-863d-f96317eed174" containerName="registry-server" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.021066 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="45b4ac21-f124-42af-863d-f96317eed174" containerName="registry-server" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.022342 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.047404 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zb7z9"] Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.133572 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztnmw\" (UniqueName: \"kubernetes.io/projected/211c53d6-3847-4a91-ba27-162ce3aee63c-kube-api-access-ztnmw\") pod \"redhat-operators-zb7z9\" (UID: \"211c53d6-3847-4a91-ba27-162ce3aee63c\") " pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.133640 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/211c53d6-3847-4a91-ba27-162ce3aee63c-utilities\") pod \"redhat-operators-zb7z9\" (UID: \"211c53d6-3847-4a91-ba27-162ce3aee63c\") " pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.133725 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/211c53d6-3847-4a91-ba27-162ce3aee63c-catalog-content\") pod \"redhat-operators-zb7z9\" (UID: \"211c53d6-3847-4a91-ba27-162ce3aee63c\") " pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.235692 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztnmw\" (UniqueName: \"kubernetes.io/projected/211c53d6-3847-4a91-ba27-162ce3aee63c-kube-api-access-ztnmw\") pod \"redhat-operators-zb7z9\" (UID: \"211c53d6-3847-4a91-ba27-162ce3aee63c\") " pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.235794 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/211c53d6-3847-4a91-ba27-162ce3aee63c-utilities\") pod \"redhat-operators-zb7z9\" (UID: \"211c53d6-3847-4a91-ba27-162ce3aee63c\") " pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.235891 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/211c53d6-3847-4a91-ba27-162ce3aee63c-catalog-content\") pod \"redhat-operators-zb7z9\" (UID: \"211c53d6-3847-4a91-ba27-162ce3aee63c\") " pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.236301 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/211c53d6-3847-4a91-ba27-162ce3aee63c-utilities\") pod \"redhat-operators-zb7z9\" (UID: \"211c53d6-3847-4a91-ba27-162ce3aee63c\") " pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.236392 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/211c53d6-3847-4a91-ba27-162ce3aee63c-catalog-content\") pod \"redhat-operators-zb7z9\" (UID: \"211c53d6-3847-4a91-ba27-162ce3aee63c\") " pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.266374 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ztnmw\" (UniqueName: \"kubernetes.io/projected/211c53d6-3847-4a91-ba27-162ce3aee63c-kube-api-access-ztnmw\") pod \"redhat-operators-zb7z9\" (UID: \"211c53d6-3847-4a91-ba27-162ce3aee63c\") " pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:25 crc kubenswrapper[5049]: I0127 19:32:25.345196 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:26 crc kubenswrapper[5049]: I0127 19:32:26.011626 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zb7z9"] Jan 27 19:32:26 crc kubenswrapper[5049]: I0127 19:32:26.569524 5049 generic.go:334] "Generic (PLEG): container finished" podID="211c53d6-3847-4a91-ba27-162ce3aee63c" containerID="2a7beb127a30600351ab8cf43cbee2b367d758d75775a3c209b27564179efe6c" exitCode=0 Jan 27 19:32:26 crc kubenswrapper[5049]: I0127 19:32:26.569640 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zb7z9" event={"ID":"211c53d6-3847-4a91-ba27-162ce3aee63c","Type":"ContainerDied","Data":"2a7beb127a30600351ab8cf43cbee2b367d758d75775a3c209b27564179efe6c"} Jan 27 19:32:26 crc kubenswrapper[5049]: I0127 19:32:26.569902 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zb7z9" event={"ID":"211c53d6-3847-4a91-ba27-162ce3aee63c","Type":"ContainerStarted","Data":"a0e60fdd620fd2360207e0b18bb9332d3c715ce7ebfbd5d435c1e308e46ec24a"} Jan 27 19:32:27 crc kubenswrapper[5049]: I0127 19:32:27.578140 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zb7z9" event={"ID":"211c53d6-3847-4a91-ba27-162ce3aee63c","Type":"ContainerStarted","Data":"48e0ca2de169b9a99a00942766941bb1bce81a01018cb99ee438910434010e2a"} Jan 27 19:32:28 crc kubenswrapper[5049]: I0127 19:32:28.594656 5049 generic.go:334] "Generic (PLEG): container finished" podID="211c53d6-3847-4a91-ba27-162ce3aee63c" containerID="48e0ca2de169b9a99a00942766941bb1bce81a01018cb99ee438910434010e2a" exitCode=0 Jan 27 19:32:28 crc kubenswrapper[5049]: I0127 19:32:28.594748 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zb7z9" event={"ID":"211c53d6-3847-4a91-ba27-162ce3aee63c","Type":"ContainerDied","Data":"48e0ca2de169b9a99a00942766941bb1bce81a01018cb99ee438910434010e2a"} Jan 27 19:32:29 crc kubenswrapper[5049]: I0127 19:32:29.605710 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zb7z9" event={"ID":"211c53d6-3847-4a91-ba27-162ce3aee63c","Type":"ContainerStarted","Data":"7378d3d7cb1b593543f9f5132d9820a11be6a993950df9135ab0796da165cf62"} Jan 27 19:32:29 crc kubenswrapper[5049]: I0127 19:32:29.628466 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zb7z9" podStartSLOduration=2.098728364 podStartE2EDuration="4.628447364s" podCreationTimestamp="2026-01-27 19:32:25 +0000 UTC" firstStartedPulling="2026-01-27 19:32:26.57128845 +0000 UTC m=+9321.670261999" lastFinishedPulling="2026-01-27 19:32:29.10100745 +0000 UTC m=+9324.199980999" observedRunningTime="2026-01-27 19:32:29.625097268 +0000 UTC m=+9324.724070837" watchObservedRunningTime="2026-01-27 19:32:29.628447364 +0000 UTC m=+9324.727420913" Jan 27 19:32:35 crc kubenswrapper[5049]: I0127 19:32:35.346000 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 
19:32:35 crc kubenswrapper[5049]: I0127 19:32:35.346537 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:36 crc kubenswrapper[5049]: I0127 19:32:36.392467 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zb7z9" podUID="211c53d6-3847-4a91-ba27-162ce3aee63c" containerName="registry-server" probeResult="failure" output=< Jan 27 19:32:36 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 19:32:36 crc kubenswrapper[5049]: > Jan 27 19:32:45 crc kubenswrapper[5049]: I0127 19:32:45.398268 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:45 crc kubenswrapper[5049]: I0127 19:32:45.445931 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:46 crc kubenswrapper[5049]: I0127 19:32:46.632777 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zb7z9"] Jan 27 19:32:46 crc kubenswrapper[5049]: I0127 19:32:46.759041 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zb7z9" podUID="211c53d6-3847-4a91-ba27-162ce3aee63c" containerName="registry-server" containerID="cri-o://7378d3d7cb1b593543f9f5132d9820a11be6a993950df9135ab0796da165cf62" gracePeriod=2 Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.557998 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.671780 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztnmw\" (UniqueName: \"kubernetes.io/projected/211c53d6-3847-4a91-ba27-162ce3aee63c-kube-api-access-ztnmw\") pod \"211c53d6-3847-4a91-ba27-162ce3aee63c\" (UID: \"211c53d6-3847-4a91-ba27-162ce3aee63c\") " Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.671955 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/211c53d6-3847-4a91-ba27-162ce3aee63c-utilities\") pod \"211c53d6-3847-4a91-ba27-162ce3aee63c\" (UID: \"211c53d6-3847-4a91-ba27-162ce3aee63c\") " Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.672183 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/211c53d6-3847-4a91-ba27-162ce3aee63c-catalog-content\") pod \"211c53d6-3847-4a91-ba27-162ce3aee63c\" (UID: \"211c53d6-3847-4a91-ba27-162ce3aee63c\") " Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.673017 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/211c53d6-3847-4a91-ba27-162ce3aee63c-utilities" (OuterVolumeSpecName: "utilities") pod "211c53d6-3847-4a91-ba27-162ce3aee63c" (UID: "211c53d6-3847-4a91-ba27-162ce3aee63c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.673486 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/211c53d6-3847-4a91-ba27-162ce3aee63c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.680764 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211c53d6-3847-4a91-ba27-162ce3aee63c-kube-api-access-ztnmw" (OuterVolumeSpecName: "kube-api-access-ztnmw") pod "211c53d6-3847-4a91-ba27-162ce3aee63c" (UID: "211c53d6-3847-4a91-ba27-162ce3aee63c"). InnerVolumeSpecName "kube-api-access-ztnmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.775019 5049 generic.go:334] "Generic (PLEG): container finished" podID="211c53d6-3847-4a91-ba27-162ce3aee63c" containerID="7378d3d7cb1b593543f9f5132d9820a11be6a993950df9135ab0796da165cf62" exitCode=0 Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.775052 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztnmw\" (UniqueName: \"kubernetes.io/projected/211c53d6-3847-4a91-ba27-162ce3aee63c-kube-api-access-ztnmw\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.775071 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zb7z9" event={"ID":"211c53d6-3847-4a91-ba27-162ce3aee63c","Type":"ContainerDied","Data":"7378d3d7cb1b593543f9f5132d9820a11be6a993950df9135ab0796da165cf62"} Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.775106 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zb7z9" event={"ID":"211c53d6-3847-4a91-ba27-162ce3aee63c","Type":"ContainerDied","Data":"a0e60fdd620fd2360207e0b18bb9332d3c715ce7ebfbd5d435c1e308e46ec24a"} Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.775112 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zb7z9" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.775137 5049 scope.go:117] "RemoveContainer" containerID="7378d3d7cb1b593543f9f5132d9820a11be6a993950df9135ab0796da165cf62" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.781058 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.781510 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.791603 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/211c53d6-3847-4a91-ba27-162ce3aee63c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "211c53d6-3847-4a91-ba27-162ce3aee63c" (UID: "211c53d6-3847-4a91-ba27-162ce3aee63c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.799234 5049 scope.go:117] "RemoveContainer" containerID="48e0ca2de169b9a99a00942766941bb1bce81a01018cb99ee438910434010e2a" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.818496 5049 scope.go:117] "RemoveContainer" containerID="2a7beb127a30600351ab8cf43cbee2b367d758d75775a3c209b27564179efe6c" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.866471 5049 scope.go:117] "RemoveContainer" containerID="7378d3d7cb1b593543f9f5132d9820a11be6a993950df9135ab0796da165cf62" Jan 27 19:32:47 crc kubenswrapper[5049]: E0127 19:32:47.867202 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7378d3d7cb1b593543f9f5132d9820a11be6a993950df9135ab0796da165cf62\": container with ID starting with 7378d3d7cb1b593543f9f5132d9820a11be6a993950df9135ab0796da165cf62 not found: ID does not exist" containerID="7378d3d7cb1b593543f9f5132d9820a11be6a993950df9135ab0796da165cf62" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.867244 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7378d3d7cb1b593543f9f5132d9820a11be6a993950df9135ab0796da165cf62"} err="failed to get container status \"7378d3d7cb1b593543f9f5132d9820a11be6a993950df9135ab0796da165cf62\": rpc error: code = NotFound desc = could not find container \"7378d3d7cb1b593543f9f5132d9820a11be6a993950df9135ab0796da165cf62\": container with ID starting with 7378d3d7cb1b593543f9f5132d9820a11be6a993950df9135ab0796da165cf62 not found: ID does not exist" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.867272 5049 scope.go:117] "RemoveContainer" containerID="48e0ca2de169b9a99a00942766941bb1bce81a01018cb99ee438910434010e2a" Jan 27 19:32:47 crc kubenswrapper[5049]: E0127 19:32:47.867706 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48e0ca2de169b9a99a00942766941bb1bce81a01018cb99ee438910434010e2a\": container with ID starting with 48e0ca2de169b9a99a00942766941bb1bce81a01018cb99ee438910434010e2a not found: ID does not exist" containerID="48e0ca2de169b9a99a00942766941bb1bce81a01018cb99ee438910434010e2a" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.867736 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48e0ca2de169b9a99a00942766941bb1bce81a01018cb99ee438910434010e2a"} err="failed to get container status \"48e0ca2de169b9a99a00942766941bb1bce81a01018cb99ee438910434010e2a\": rpc error: code = NotFound desc = could not find container \"48e0ca2de169b9a99a00942766941bb1bce81a01018cb99ee438910434010e2a\": container with ID starting with 48e0ca2de169b9a99a00942766941bb1bce81a01018cb99ee438910434010e2a not found: ID does not exist" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.867753 5049 scope.go:117] "RemoveContainer" containerID="2a7beb127a30600351ab8cf43cbee2b367d758d75775a3c209b27564179efe6c" Jan 27 19:32:47 crc kubenswrapper[5049]: E0127 19:32:47.868880 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a7beb127a30600351ab8cf43cbee2b367d758d75775a3c209b27564179efe6c\": container with ID starting with 2a7beb127a30600351ab8cf43cbee2b367d758d75775a3c209b27564179efe6c not found: ID does not exist" containerID="2a7beb127a30600351ab8cf43cbee2b367d758d75775a3c209b27564179efe6c" Jan 27 19:32:47 crc 
kubenswrapper[5049]: I0127 19:32:47.868944 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a7beb127a30600351ab8cf43cbee2b367d758d75775a3c209b27564179efe6c"} err="failed to get container status \"2a7beb127a30600351ab8cf43cbee2b367d758d75775a3c209b27564179efe6c\": rpc error: code = NotFound desc = could not find container \"2a7beb127a30600351ab8cf43cbee2b367d758d75775a3c209b27564179efe6c\": container with ID starting with 2a7beb127a30600351ab8cf43cbee2b367d758d75775a3c209b27564179efe6c not found: ID does not exist" Jan 27 19:32:47 crc kubenswrapper[5049]: I0127 19:32:47.877804 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/211c53d6-3847-4a91-ba27-162ce3aee63c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:32:48 crc kubenswrapper[5049]: I0127 19:32:48.110785 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zb7z9"] Jan 27 19:32:48 crc kubenswrapper[5049]: I0127 19:32:48.118377 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zb7z9"] Jan 27 19:32:49 crc kubenswrapper[5049]: I0127 19:32:49.658816 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="211c53d6-3847-4a91-ba27-162ce3aee63c" path="/var/lib/kubelet/pods/211c53d6-3847-4a91-ba27-162ce3aee63c/volumes" Jan 27 19:33:17 crc kubenswrapper[5049]: I0127 19:33:17.781572 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:33:17 crc kubenswrapper[5049]: I0127 19:33:17.782277 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:33:47 crc kubenswrapper[5049]: I0127 19:33:47.781830 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:33:47 crc kubenswrapper[5049]: I0127 19:33:47.782450 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:33:47 crc kubenswrapper[5049]: I0127 19:33:47.782504 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 19:33:47 crc kubenswrapper[5049]: I0127 19:33:47.783362 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fba46313af7925fc1e3bbc404383696fb33297e02b77b2e53ba038380beb78cb"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will 
be restarted" Jan 27 19:33:47 crc kubenswrapper[5049]: I0127 19:33:47.783418 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://fba46313af7925fc1e3bbc404383696fb33297e02b77b2e53ba038380beb78cb" gracePeriod=600 Jan 27 19:33:48 crc kubenswrapper[5049]: I0127 19:33:48.348467 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="fba46313af7925fc1e3bbc404383696fb33297e02b77b2e53ba038380beb78cb" exitCode=0 Jan 27 19:33:48 crc kubenswrapper[5049]: I0127 19:33:48.348514 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"fba46313af7925fc1e3bbc404383696fb33297e02b77b2e53ba038380beb78cb"} Jan 27 19:33:48 crc kubenswrapper[5049]: I0127 19:33:48.349326 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"} Jan 27 19:33:48 crc kubenswrapper[5049]: I0127 19:33:48.349383 5049 scope.go:117] "RemoveContainer" containerID="c499bbff755650be8b0991c295d456b6c4e619db1d3504babee3f2c519f0ca6d" Jan 27 19:36:17 crc kubenswrapper[5049]: I0127 19:36:17.781822 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:36:17 crc kubenswrapper[5049]: I0127 19:36:17.783015 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:36:47 crc kubenswrapper[5049]: I0127 19:36:47.781623 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:36:47 crc kubenswrapper[5049]: I0127 19:36:47.782446 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:37:17 crc kubenswrapper[5049]: I0127 19:37:17.781921 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:37:17 crc kubenswrapper[5049]: I0127 19:37:17.782791 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" 
podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:37:17 crc kubenswrapper[5049]: I0127 19:37:17.782874 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 19:37:17 crc kubenswrapper[5049]: I0127 19:37:17.784396 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:37:17 crc kubenswrapper[5049]: I0127 19:37:17.784529 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" gracePeriod=600 Jan 27 19:37:18 crc kubenswrapper[5049]: E0127 19:37:18.106079 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:37:18 crc kubenswrapper[5049]: I0127 19:37:18.393136 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" exitCode=0 Jan 27 19:37:18 crc kubenswrapper[5049]: I0127 19:37:18.393190 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"} Jan 27 19:37:18 crc kubenswrapper[5049]: I0127 19:37:18.393233 5049 scope.go:117] "RemoveContainer" containerID="fba46313af7925fc1e3bbc404383696fb33297e02b77b2e53ba038380beb78cb" Jan 27 19:37:18 crc kubenswrapper[5049]: I0127 19:37:18.394026 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" Jan 27 19:37:18 crc kubenswrapper[5049]: E0127 19:37:18.394430 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:37:31 crc kubenswrapper[5049]: I0127 19:37:31.647184 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" Jan 27 19:37:31 crc kubenswrapper[5049]: E0127 19:37:31.648464 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:37:42 crc kubenswrapper[5049]: I0127 19:37:42.646789 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" Jan 27 19:37:42 crc kubenswrapper[5049]: E0127 19:37:42.647795 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:37:57 crc kubenswrapper[5049]: I0127 19:37:57.645995 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" Jan 27 19:37:57 crc kubenswrapper[5049]: E0127 19:37:57.648313 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:38:11 crc kubenswrapper[5049]: I0127 19:38:11.646823 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" Jan 27 19:38:11 crc kubenswrapper[5049]: E0127 19:38:11.647573 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:38:25 crc kubenswrapper[5049]: I0127 19:38:25.667569 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" Jan 27 19:38:25 crc kubenswrapper[5049]: E0127 19:38:25.668861 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:38:39 crc kubenswrapper[5049]: I0127 19:38:39.647553 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" Jan 27 19:38:39 crc kubenswrapper[5049]: E0127 19:38:39.648522 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:38:53 crc kubenswrapper[5049]: I0127 19:38:53.646567 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" Jan 27 19:38:53 crc kubenswrapper[5049]: E0127 19:38:53.647475 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.283290 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8pdzp"] Jan 27 19:39:03 crc kubenswrapper[5049]: E0127 19:39:03.284232 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211c53d6-3847-4a91-ba27-162ce3aee63c" containerName="extract-utilities" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.284250 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="211c53d6-3847-4a91-ba27-162ce3aee63c" containerName="extract-utilities" Jan 27 19:39:03 crc kubenswrapper[5049]: E0127 19:39:03.284285 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211c53d6-3847-4a91-ba27-162ce3aee63c" containerName="registry-server" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.284293 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="211c53d6-3847-4a91-ba27-162ce3aee63c" containerName="registry-server" Jan 27 19:39:03 crc kubenswrapper[5049]: E0127 19:39:03.284320 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211c53d6-3847-4a91-ba27-162ce3aee63c" containerName="extract-content" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.284330 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="211c53d6-3847-4a91-ba27-162ce3aee63c" containerName="extract-content" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.284574 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="211c53d6-3847-4a91-ba27-162ce3aee63c" containerName="registry-server" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.286235 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.302591 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8pdzp"] Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.454449 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/997914a2-beda-4721-9d54-9606ebf77565-utilities\") pod \"redhat-marketplace-8pdzp\" (UID: \"997914a2-beda-4721-9d54-9606ebf77565\") " pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.454803 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-767x8\" (UniqueName: \"kubernetes.io/projected/997914a2-beda-4721-9d54-9606ebf77565-kube-api-access-767x8\") pod \"redhat-marketplace-8pdzp\" (UID: \"997914a2-beda-4721-9d54-9606ebf77565\") " pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.454877 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/997914a2-beda-4721-9d54-9606ebf77565-catalog-content\") pod \"redhat-marketplace-8pdzp\" (UID: \"997914a2-beda-4721-9d54-9606ebf77565\") " pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.556555 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/997914a2-beda-4721-9d54-9606ebf77565-catalog-content\") pod \"redhat-marketplace-8pdzp\" (UID: \"997914a2-beda-4721-9d54-9606ebf77565\") " pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.556705 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/997914a2-beda-4721-9d54-9606ebf77565-utilities\") pod \"redhat-marketplace-8pdzp\" (UID: \"997914a2-beda-4721-9d54-9606ebf77565\") " pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.556779 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-767x8\" (UniqueName: \"kubernetes.io/projected/997914a2-beda-4721-9d54-9606ebf77565-kube-api-access-767x8\") pod \"redhat-marketplace-8pdzp\" (UID: \"997914a2-beda-4721-9d54-9606ebf77565\") " pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.557477 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/997914a2-beda-4721-9d54-9606ebf77565-utilities\") pod \"redhat-marketplace-8pdzp\" (UID: \"997914a2-beda-4721-9d54-9606ebf77565\") " pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.557544 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/997914a2-beda-4721-9d54-9606ebf77565-catalog-content\") pod \"redhat-marketplace-8pdzp\" (UID: \"997914a2-beda-4721-9d54-9606ebf77565\") " pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.582529 5049 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-767x8\" (UniqueName: \"kubernetes.io/projected/997914a2-beda-4721-9d54-9606ebf77565-kube-api-access-767x8\") pod \"redhat-marketplace-8pdzp\" (UID: \"997914a2-beda-4721-9d54-9606ebf77565\") " pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:03 crc kubenswrapper[5049]: I0127 19:39:03.616333 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:04 crc kubenswrapper[5049]: I0127 19:39:04.157827 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8pdzp"] Jan 27 19:39:04 crc kubenswrapper[5049]: W0127 19:39:04.168457 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod997914a2_beda_4721_9d54_9606ebf77565.slice/crio-0c20bb1a88795cf17ecb9fb1bf9dcbdd5519dba0a1125a4f0e15c6867cbb7197 WatchSource:0}: Error finding container 0c20bb1a88795cf17ecb9fb1bf9dcbdd5519dba0a1125a4f0e15c6867cbb7197: Status 404 returned error can't find the container with id 0c20bb1a88795cf17ecb9fb1bf9dcbdd5519dba0a1125a4f0e15c6867cbb7197 Jan 27 19:39:04 crc kubenswrapper[5049]: I0127 19:39:04.410265 5049 generic.go:334] "Generic (PLEG): container finished" podID="997914a2-beda-4721-9d54-9606ebf77565" containerID="305e5ff3929fe36bda878b119cc7ca944bebf817e8298187184bfe20b0b10925" exitCode=0 Jan 27 19:39:04 crc kubenswrapper[5049]: I0127 19:39:04.410352 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8pdzp" event={"ID":"997914a2-beda-4721-9d54-9606ebf77565","Type":"ContainerDied","Data":"305e5ff3929fe36bda878b119cc7ca944bebf817e8298187184bfe20b0b10925"} Jan 27 19:39:04 crc kubenswrapper[5049]: I0127 19:39:04.410641 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8pdzp" event={"ID":"997914a2-beda-4721-9d54-9606ebf77565","Type":"ContainerStarted","Data":"0c20bb1a88795cf17ecb9fb1bf9dcbdd5519dba0a1125a4f0e15c6867cbb7197"} Jan 27 19:39:04 crc kubenswrapper[5049]: I0127 19:39:04.412482 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:39:05 crc kubenswrapper[5049]: I0127 19:39:05.651788 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" Jan 27 19:39:05 crc kubenswrapper[5049]: E0127 19:39:05.652485 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:39:06 crc kubenswrapper[5049]: I0127 19:39:06.431983 5049 generic.go:334] "Generic (PLEG): container finished" podID="997914a2-beda-4721-9d54-9606ebf77565" containerID="ca2626aab2363d631168796cb9d8e0f470c73163ff1467ff8baa256f9700a37b" exitCode=0 Jan 27 19:39:06 crc kubenswrapper[5049]: I0127 19:39:06.432029 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8pdzp" event={"ID":"997914a2-beda-4721-9d54-9606ebf77565","Type":"ContainerDied","Data":"ca2626aab2363d631168796cb9d8e0f470c73163ff1467ff8baa256f9700a37b"} Jan 27 19:39:07 crc kubenswrapper[5049]: I0127 
19:39:07.445134 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8pdzp" event={"ID":"997914a2-beda-4721-9d54-9606ebf77565","Type":"ContainerStarted","Data":"8d2261735ed5e734f96b062ed6e4ff6fbb91c4ae78b93e66ab2185161b3fa21a"} Jan 27 19:39:13 crc kubenswrapper[5049]: I0127 19:39:13.616548 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:13 crc kubenswrapper[5049]: I0127 19:39:13.617131 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:13 crc kubenswrapper[5049]: I0127 19:39:13.669605 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:13 crc kubenswrapper[5049]: I0127 19:39:13.701937 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8pdzp" podStartSLOduration=8.312892135 podStartE2EDuration="10.701911986s" podCreationTimestamp="2026-01-27 19:39:03 +0000 UTC" firstStartedPulling="2026-01-27 19:39:04.412211574 +0000 UTC m=+9719.511185123" lastFinishedPulling="2026-01-27 19:39:06.801231425 +0000 UTC m=+9721.900204974" observedRunningTime="2026-01-27 19:39:07.464291693 +0000 UTC m=+9722.563265252" watchObservedRunningTime="2026-01-27 19:39:13.701911986 +0000 UTC m=+9728.800885535" Jan 27 19:39:14 crc kubenswrapper[5049]: I0127 19:39:14.563652 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:14 crc kubenswrapper[5049]: I0127 19:39:14.611135 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8pdzp"] Jan 27 19:39:16 crc kubenswrapper[5049]: I0127 19:39:16.524904 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8pdzp" podUID="997914a2-beda-4721-9d54-9606ebf77565" containerName="registry-server" containerID="cri-o://8d2261735ed5e734f96b062ed6e4ff6fbb91c4ae78b93e66ab2185161b3fa21a" gracePeriod=2 Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.011533 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.059542 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/997914a2-beda-4721-9d54-9606ebf77565-utilities\") pod \"997914a2-beda-4721-9d54-9606ebf77565\" (UID: \"997914a2-beda-4721-9d54-9606ebf77565\") " Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.059661 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-767x8\" (UniqueName: \"kubernetes.io/projected/997914a2-beda-4721-9d54-9606ebf77565-kube-api-access-767x8\") pod \"997914a2-beda-4721-9d54-9606ebf77565\" (UID: \"997914a2-beda-4721-9d54-9606ebf77565\") " Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.059738 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/997914a2-beda-4721-9d54-9606ebf77565-catalog-content\") pod \"997914a2-beda-4721-9d54-9606ebf77565\" (UID: \"997914a2-beda-4721-9d54-9606ebf77565\") " Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.060690 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/997914a2-beda-4721-9d54-9606ebf77565-utilities" (OuterVolumeSpecName: "utilities") pod "997914a2-beda-4721-9d54-9606ebf77565" (UID: "997914a2-beda-4721-9d54-9606ebf77565"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.078361 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/997914a2-beda-4721-9d54-9606ebf77565-kube-api-access-767x8" (OuterVolumeSpecName: "kube-api-access-767x8") pod "997914a2-beda-4721-9d54-9606ebf77565" (UID: "997914a2-beda-4721-9d54-9606ebf77565"). InnerVolumeSpecName "kube-api-access-767x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.084944 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/997914a2-beda-4721-9d54-9606ebf77565-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "997914a2-beda-4721-9d54-9606ebf77565" (UID: "997914a2-beda-4721-9d54-9606ebf77565"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.161900 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-767x8\" (UniqueName: \"kubernetes.io/projected/997914a2-beda-4721-9d54-9606ebf77565-kube-api-access-767x8\") on node \"crc\" DevicePath \"\"" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.161969 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/997914a2-beda-4721-9d54-9606ebf77565-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.161984 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/997914a2-beda-4721-9d54-9606ebf77565-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.536300 5049 generic.go:334] "Generic (PLEG): container finished" podID="997914a2-beda-4721-9d54-9606ebf77565" containerID="8d2261735ed5e734f96b062ed6e4ff6fbb91c4ae78b93e66ab2185161b3fa21a" exitCode=0 Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.536349 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8pdzp" event={"ID":"997914a2-beda-4721-9d54-9606ebf77565","Type":"ContainerDied","Data":"8d2261735ed5e734f96b062ed6e4ff6fbb91c4ae78b93e66ab2185161b3fa21a"} Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.536385 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8pdzp" event={"ID":"997914a2-beda-4721-9d54-9606ebf77565","Type":"ContainerDied","Data":"0c20bb1a88795cf17ecb9fb1bf9dcbdd5519dba0a1125a4f0e15c6867cbb7197"} Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.536373 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8pdzp" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.536410 5049 scope.go:117] "RemoveContainer" containerID="8d2261735ed5e734f96b062ed6e4ff6fbb91c4ae78b93e66ab2185161b3fa21a" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.566182 5049 scope.go:117] "RemoveContainer" containerID="ca2626aab2363d631168796cb9d8e0f470c73163ff1467ff8baa256f9700a37b" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.583648 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8pdzp"] Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.592576 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8pdzp"] Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.597640 5049 scope.go:117] "RemoveContainer" containerID="305e5ff3929fe36bda878b119cc7ca944bebf817e8298187184bfe20b0b10925" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.641247 5049 scope.go:117] "RemoveContainer" containerID="8d2261735ed5e734f96b062ed6e4ff6fbb91c4ae78b93e66ab2185161b3fa21a" Jan 27 19:39:17 crc kubenswrapper[5049]: E0127 19:39:17.641595 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d2261735ed5e734f96b062ed6e4ff6fbb91c4ae78b93e66ab2185161b3fa21a\": container with ID starting with 8d2261735ed5e734f96b062ed6e4ff6fbb91c4ae78b93e66ab2185161b3fa21a not found: ID does not exist" containerID="8d2261735ed5e734f96b062ed6e4ff6fbb91c4ae78b93e66ab2185161b3fa21a" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.641638 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d2261735ed5e734f96b062ed6e4ff6fbb91c4ae78b93e66ab2185161b3fa21a"} err="failed to get container status \"8d2261735ed5e734f96b062ed6e4ff6fbb91c4ae78b93e66ab2185161b3fa21a\": rpc error: code = NotFound desc = could not find container \"8d2261735ed5e734f96b062ed6e4ff6fbb91c4ae78b93e66ab2185161b3fa21a\": container with ID starting with 8d2261735ed5e734f96b062ed6e4ff6fbb91c4ae78b93e66ab2185161b3fa21a not found: ID does not exist" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.641659 5049 scope.go:117] "RemoveContainer" containerID="ca2626aab2363d631168796cb9d8e0f470c73163ff1467ff8baa256f9700a37b" Jan 27 19:39:17 crc kubenswrapper[5049]: E0127 19:39:17.641914 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca2626aab2363d631168796cb9d8e0f470c73163ff1467ff8baa256f9700a37b\": container with ID starting with ca2626aab2363d631168796cb9d8e0f470c73163ff1467ff8baa256f9700a37b not found: ID does not exist" containerID="ca2626aab2363d631168796cb9d8e0f470c73163ff1467ff8baa256f9700a37b" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.641938 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca2626aab2363d631168796cb9d8e0f470c73163ff1467ff8baa256f9700a37b"} err="failed to get container status \"ca2626aab2363d631168796cb9d8e0f470c73163ff1467ff8baa256f9700a37b\": rpc error: code = NotFound desc = could not find container \"ca2626aab2363d631168796cb9d8e0f470c73163ff1467ff8baa256f9700a37b\": container with ID starting with ca2626aab2363d631168796cb9d8e0f470c73163ff1467ff8baa256f9700a37b not found: ID does not exist" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.641952 5049 scope.go:117] "RemoveContainer" 
containerID="305e5ff3929fe36bda878b119cc7ca944bebf817e8298187184bfe20b0b10925" Jan 27 19:39:17 crc kubenswrapper[5049]: E0127 19:39:17.642175 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"305e5ff3929fe36bda878b119cc7ca944bebf817e8298187184bfe20b0b10925\": container with ID starting with 305e5ff3929fe36bda878b119cc7ca944bebf817e8298187184bfe20b0b10925 not found: ID does not exist" containerID="305e5ff3929fe36bda878b119cc7ca944bebf817e8298187184bfe20b0b10925" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.642197 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"305e5ff3929fe36bda878b119cc7ca944bebf817e8298187184bfe20b0b10925"} err="failed to get container status \"305e5ff3929fe36bda878b119cc7ca944bebf817e8298187184bfe20b0b10925\": rpc error: code = NotFound desc = could not find container \"305e5ff3929fe36bda878b119cc7ca944bebf817e8298187184bfe20b0b10925\": container with ID starting with 305e5ff3929fe36bda878b119cc7ca944bebf817e8298187184bfe20b0b10925 not found: ID does not exist" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.647061 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" Jan 27 19:39:17 crc kubenswrapper[5049]: E0127 19:39:17.647346 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:39:17 crc kubenswrapper[5049]: I0127 19:39:17.660617 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="997914a2-beda-4721-9d54-9606ebf77565" path="/var/lib/kubelet/pods/997914a2-beda-4721-9d54-9606ebf77565/volumes" Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.489019 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kmqdl"] Jan 27 19:39:19 crc kubenswrapper[5049]: E0127 19:39:19.489745 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="997914a2-beda-4721-9d54-9606ebf77565" containerName="registry-server" Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.489762 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="997914a2-beda-4721-9d54-9606ebf77565" containerName="registry-server" Jan 27 19:39:19 crc kubenswrapper[5049]: E0127 19:39:19.489774 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="997914a2-beda-4721-9d54-9606ebf77565" containerName="extract-utilities" Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.489781 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="997914a2-beda-4721-9d54-9606ebf77565" containerName="extract-utilities" Jan 27 19:39:19 crc kubenswrapper[5049]: E0127 19:39:19.489811 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="997914a2-beda-4721-9d54-9606ebf77565" containerName="extract-content" Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.489819 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="997914a2-beda-4721-9d54-9606ebf77565" containerName="extract-content" Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.490021 5049 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="997914a2-beda-4721-9d54-9606ebf77565" containerName="registry-server"
Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.491658 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.499500 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kmqdl"]
Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.512031 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35498c4-ab23-491c-8ddb-7d33e959fa0e-catalog-content\") pod \"certified-operators-kmqdl\" (UID: \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\") " pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.512164 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wghg\" (UniqueName: \"kubernetes.io/projected/d35498c4-ab23-491c-8ddb-7d33e959fa0e-kube-api-access-4wghg\") pod \"certified-operators-kmqdl\" (UID: \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\") " pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.512221 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35498c4-ab23-491c-8ddb-7d33e959fa0e-utilities\") pod \"certified-operators-kmqdl\" (UID: \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\") " pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.614383 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35498c4-ab23-491c-8ddb-7d33e959fa0e-catalog-content\") pod \"certified-operators-kmqdl\" (UID: \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\") " pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.614597 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wghg\" (UniqueName: \"kubernetes.io/projected/d35498c4-ab23-491c-8ddb-7d33e959fa0e-kube-api-access-4wghg\") pod \"certified-operators-kmqdl\" (UID: \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\") " pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.614650 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35498c4-ab23-491c-8ddb-7d33e959fa0e-utilities\") pod \"certified-operators-kmqdl\" (UID: \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\") " pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.614856 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35498c4-ab23-491c-8ddb-7d33e959fa0e-catalog-content\") pod \"certified-operators-kmqdl\" (UID: \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\") " pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.615215 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35498c4-ab23-491c-8ddb-7d33e959fa0e-utilities\") pod \"certified-operators-kmqdl\" (UID: \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\") " pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.640152 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wghg\" (UniqueName: \"kubernetes.io/projected/d35498c4-ab23-491c-8ddb-7d33e959fa0e-kube-api-access-4wghg\") pod \"certified-operators-kmqdl\" (UID: \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\") " pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:19 crc kubenswrapper[5049]: I0127 19:39:19.834571 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:20 crc kubenswrapper[5049]: I0127 19:39:20.353992 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kmqdl"]
Jan 27 19:39:20 crc kubenswrapper[5049]: I0127 19:39:20.573025 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmqdl" event={"ID":"d35498c4-ab23-491c-8ddb-7d33e959fa0e","Type":"ContainerStarted","Data":"bbb3a76d986fb66b8cccf8995f270ffc7437adb9aad178b816f9a88c8ba9dbd3"}
Jan 27 19:39:21 crc kubenswrapper[5049]: I0127 19:39:21.584477 5049 generic.go:334] "Generic (PLEG): container finished" podID="d35498c4-ab23-491c-8ddb-7d33e959fa0e" containerID="d1e8de90f9704a55d674cc72307de3949e50c378306afb50ca508e3954867e61" exitCode=0
Jan 27 19:39:21 crc kubenswrapper[5049]: I0127 19:39:21.584559 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmqdl" event={"ID":"d35498c4-ab23-491c-8ddb-7d33e959fa0e","Type":"ContainerDied","Data":"d1e8de90f9704a55d674cc72307de3949e50c378306afb50ca508e3954867e61"}
Jan 27 19:39:22 crc kubenswrapper[5049]: I0127 19:39:22.596897 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmqdl" event={"ID":"d35498c4-ab23-491c-8ddb-7d33e959fa0e","Type":"ContainerStarted","Data":"c7bba8a48853e908a753b65c0d090c2bc1b4d44c35ec49aae89f541728c3ce44"}
Jan 27 19:39:23 crc kubenswrapper[5049]: I0127 19:39:23.608120 5049 generic.go:334] "Generic (PLEG): container finished" podID="d35498c4-ab23-491c-8ddb-7d33e959fa0e" containerID="c7bba8a48853e908a753b65c0d090c2bc1b4d44c35ec49aae89f541728c3ce44" exitCode=0
Jan 27 19:39:23 crc kubenswrapper[5049]: I0127 19:39:23.608165 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmqdl" event={"ID":"d35498c4-ab23-491c-8ddb-7d33e959fa0e","Type":"ContainerDied","Data":"c7bba8a48853e908a753b65c0d090c2bc1b4d44c35ec49aae89f541728c3ce44"}
Jan 27 19:39:24 crc kubenswrapper[5049]: I0127 19:39:24.621715 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmqdl" event={"ID":"d35498c4-ab23-491c-8ddb-7d33e959fa0e","Type":"ContainerStarted","Data":"affbd86f910f5d31f7653a45c594caa83d5ed3b3c0465dbf1e146c53fa114bf7"}
Jan 27 19:39:24 crc kubenswrapper[5049]: I0127 19:39:24.641967 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kmqdl" podStartSLOduration=3.198231455 podStartE2EDuration="5.641949972s" podCreationTimestamp="2026-01-27 19:39:19 +0000 UTC" firstStartedPulling="2026-01-27 19:39:21.587456435 +0000 UTC m=+9736.686429994" lastFinishedPulling="2026-01-27 19:39:24.031174962 +0000 UTC m=+9739.130148511" observedRunningTime="2026-01-27 19:39:24.640845941 +0000 UTC m=+9739.739819490" watchObservedRunningTime="2026-01-27 19:39:24.641949972 +0000 UTC m=+9739.740923521"
Jan 27 19:39:29 crc kubenswrapper[5049]: I0127 19:39:29.835047 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:29 crc kubenswrapper[5049]: I0127 19:39:29.835618 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:29 crc kubenswrapper[5049]: I0127 19:39:29.941867 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:30 crc kubenswrapper[5049]: I0127 19:39:30.724845 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:30 crc kubenswrapper[5049]: I0127 19:39:30.780747 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kmqdl"]
Jan 27 19:39:31 crc kubenswrapper[5049]: I0127 19:39:31.646490 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"
Jan 27 19:39:31 crc kubenswrapper[5049]: E0127 19:39:31.647218 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:39:32 crc kubenswrapper[5049]: I0127 19:39:32.696505 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kmqdl" podUID="d35498c4-ab23-491c-8ddb-7d33e959fa0e" containerName="registry-server" containerID="cri-o://affbd86f910f5d31f7653a45c594caa83d5ed3b3c0465dbf1e146c53fa114bf7" gracePeriod=2
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.332834 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.496502 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wghg\" (UniqueName: \"kubernetes.io/projected/d35498c4-ab23-491c-8ddb-7d33e959fa0e-kube-api-access-4wghg\") pod \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\" (UID: \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\") "
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.496661 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35498c4-ab23-491c-8ddb-7d33e959fa0e-catalog-content\") pod \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\" (UID: \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\") "
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.496755 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35498c4-ab23-491c-8ddb-7d33e959fa0e-utilities\") pod \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\" (UID: \"d35498c4-ab23-491c-8ddb-7d33e959fa0e\") "
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.497925 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d35498c4-ab23-491c-8ddb-7d33e959fa0e-utilities" (OuterVolumeSpecName: "utilities") pod "d35498c4-ab23-491c-8ddb-7d33e959fa0e" (UID: "d35498c4-ab23-491c-8ddb-7d33e959fa0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.502242 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35498c4-ab23-491c-8ddb-7d33e959fa0e-kube-api-access-4wghg" (OuterVolumeSpecName: "kube-api-access-4wghg") pod "d35498c4-ab23-491c-8ddb-7d33e959fa0e" (UID: "d35498c4-ab23-491c-8ddb-7d33e959fa0e"). InnerVolumeSpecName "kube-api-access-4wghg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.548761 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d35498c4-ab23-491c-8ddb-7d33e959fa0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d35498c4-ab23-491c-8ddb-7d33e959fa0e" (UID: "d35498c4-ab23-491c-8ddb-7d33e959fa0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.599284 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d35498c4-ab23-491c-8ddb-7d33e959fa0e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.599323 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d35498c4-ab23-491c-8ddb-7d33e959fa0e-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.599337 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wghg\" (UniqueName: \"kubernetes.io/projected/d35498c4-ab23-491c-8ddb-7d33e959fa0e-kube-api-access-4wghg\") on node \"crc\" DevicePath \"\""
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.710503 5049 generic.go:334] "Generic (PLEG): container finished" podID="d35498c4-ab23-491c-8ddb-7d33e959fa0e" containerID="affbd86f910f5d31f7653a45c594caa83d5ed3b3c0465dbf1e146c53fa114bf7" exitCode=0
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.710570 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmqdl"
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.710574 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmqdl" event={"ID":"d35498c4-ab23-491c-8ddb-7d33e959fa0e","Type":"ContainerDied","Data":"affbd86f910f5d31f7653a45c594caa83d5ed3b3c0465dbf1e146c53fa114bf7"}
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.712353 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmqdl" event={"ID":"d35498c4-ab23-491c-8ddb-7d33e959fa0e","Type":"ContainerDied","Data":"bbb3a76d986fb66b8cccf8995f270ffc7437adb9aad178b816f9a88c8ba9dbd3"}
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.712381 5049 scope.go:117] "RemoveContainer" containerID="affbd86f910f5d31f7653a45c594caa83d5ed3b3c0465dbf1e146c53fa114bf7"
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.759112 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kmqdl"]
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.759757 5049 scope.go:117] "RemoveContainer" containerID="c7bba8a48853e908a753b65c0d090c2bc1b4d44c35ec49aae89f541728c3ce44"
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.772134 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kmqdl"]
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.781658 5049 scope.go:117] "RemoveContainer" containerID="d1e8de90f9704a55d674cc72307de3949e50c378306afb50ca508e3954867e61"
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.840337 5049 scope.go:117] "RemoveContainer" containerID="affbd86f910f5d31f7653a45c594caa83d5ed3b3c0465dbf1e146c53fa114bf7"
Jan 27 19:39:33 crc kubenswrapper[5049]: E0127 19:39:33.840936 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"affbd86f910f5d31f7653a45c594caa83d5ed3b3c0465dbf1e146c53fa114bf7\": container with ID starting with affbd86f910f5d31f7653a45c594caa83d5ed3b3c0465dbf1e146c53fa114bf7 not found: ID does not exist" containerID="affbd86f910f5d31f7653a45c594caa83d5ed3b3c0465dbf1e146c53fa114bf7"
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.840975 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"affbd86f910f5d31f7653a45c594caa83d5ed3b3c0465dbf1e146c53fa114bf7"} err="failed to get container status \"affbd86f910f5d31f7653a45c594caa83d5ed3b3c0465dbf1e146c53fa114bf7\": rpc error: code = NotFound desc = could not find container \"affbd86f910f5d31f7653a45c594caa83d5ed3b3c0465dbf1e146c53fa114bf7\": container with ID starting with affbd86f910f5d31f7653a45c594caa83d5ed3b3c0465dbf1e146c53fa114bf7 not found: ID does not exist"
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.841013 5049 scope.go:117] "RemoveContainer" containerID="c7bba8a48853e908a753b65c0d090c2bc1b4d44c35ec49aae89f541728c3ce44"
Jan 27 19:39:33 crc kubenswrapper[5049]: E0127 19:39:33.841469 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7bba8a48853e908a753b65c0d090c2bc1b4d44c35ec49aae89f541728c3ce44\": container with ID starting with c7bba8a48853e908a753b65c0d090c2bc1b4d44c35ec49aae89f541728c3ce44 not found: ID does not exist" containerID="c7bba8a48853e908a753b65c0d090c2bc1b4d44c35ec49aae89f541728c3ce44"
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.841512 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7bba8a48853e908a753b65c0d090c2bc1b4d44c35ec49aae89f541728c3ce44"} err="failed to get container status \"c7bba8a48853e908a753b65c0d090c2bc1b4d44c35ec49aae89f541728c3ce44\": rpc error: code = NotFound desc = could not find container \"c7bba8a48853e908a753b65c0d090c2bc1b4d44c35ec49aae89f541728c3ce44\": container with ID starting with c7bba8a48853e908a753b65c0d090c2bc1b4d44c35ec49aae89f541728c3ce44 not found: ID does not exist"
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.841528 5049 scope.go:117] "RemoveContainer" containerID="d1e8de90f9704a55d674cc72307de3949e50c378306afb50ca508e3954867e61"
Jan 27 19:39:33 crc kubenswrapper[5049]: E0127 19:39:33.841991 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1e8de90f9704a55d674cc72307de3949e50c378306afb50ca508e3954867e61\": container with ID starting with d1e8de90f9704a55d674cc72307de3949e50c378306afb50ca508e3954867e61 not found: ID does not exist" containerID="d1e8de90f9704a55d674cc72307de3949e50c378306afb50ca508e3954867e61"
Jan 27 19:39:33 crc kubenswrapper[5049]: I0127 19:39:33.842018 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1e8de90f9704a55d674cc72307de3949e50c378306afb50ca508e3954867e61"} err="failed to get container status \"d1e8de90f9704a55d674cc72307de3949e50c378306afb50ca508e3954867e61\": rpc error: code = NotFound desc = could not find container \"d1e8de90f9704a55d674cc72307de3949e50c378306afb50ca508e3954867e61\": container with ID starting with d1e8de90f9704a55d674cc72307de3949e50c378306afb50ca508e3954867e61 not found: ID does not exist"
Jan 27 19:39:35 crc kubenswrapper[5049]: I0127 19:39:35.670970 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d35498c4-ab23-491c-8ddb-7d33e959fa0e" path="/var/lib/kubelet/pods/d35498c4-ab23-491c-8ddb-7d33e959fa0e/volumes"
Jan 27 19:39:45 crc kubenswrapper[5049]: I0127 19:39:45.661378 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"
Jan 27 19:39:45 crc kubenswrapper[5049]: E0127 19:39:45.662670 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:39:57 crc kubenswrapper[5049]: I0127 19:39:57.646784 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"
Jan 27 19:39:57 crc kubenswrapper[5049]: E0127 19:39:57.647587 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:40:08 crc kubenswrapper[5049]: I0127 19:40:08.646378 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"
Jan 27 19:40:08 crc kubenswrapper[5049]: E0127 19:40:08.647234 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:40:19 crc kubenswrapper[5049]: I0127 19:40:19.650615 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"
Jan 27 19:40:19 crc kubenswrapper[5049]: E0127 19:40:19.651848 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:40:32 crc kubenswrapper[5049]: I0127 19:40:32.646654 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"
Jan 27 19:40:32 crc kubenswrapper[5049]: E0127 19:40:32.647624 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:40:45 crc kubenswrapper[5049]: I0127 19:40:45.650811 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"
Jan 27 19:40:45 crc kubenswrapper[5049]: E0127 19:40:45.652295 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:40:56 crc kubenswrapper[5049]: I0127 19:40:56.646931 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"
Jan 27 19:40:56 crc kubenswrapper[5049]: E0127 19:40:56.647872 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:41:07 crc kubenswrapper[5049]: I0127 19:41:07.646218 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"
Jan 27 19:41:07 crc kubenswrapper[5049]: E0127 19:41:07.647031 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:41:21 crc kubenswrapper[5049]: I0127 19:41:21.647209 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"
Jan 27 19:41:21 crc kubenswrapper[5049]: E0127 19:41:21.648510 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:41:32 crc kubenswrapper[5049]: I0127 19:41:32.646085 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"
Jan 27 19:41:32 crc kubenswrapper[5049]: E0127 19:41:32.648093 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:41:47 crc kubenswrapper[5049]: I0127 19:41:47.646960 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"
Jan 27 19:41:47 crc kubenswrapper[5049]: E0127 19:41:47.647883 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:41:52 crc kubenswrapper[5049]: I0127 19:41:52.817506 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s6879"]
Jan 27 19:41:52 crc kubenswrapper[5049]: E0127 19:41:52.818801 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35498c4-ab23-491c-8ddb-7d33e959fa0e" containerName="registry-server"
Jan 27 19:41:52 crc kubenswrapper[5049]: I0127 19:41:52.818818 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35498c4-ab23-491c-8ddb-7d33e959fa0e" containerName="registry-server"
Jan 27 19:41:52 crc kubenswrapper[5049]: E0127 19:41:52.818847 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35498c4-ab23-491c-8ddb-7d33e959fa0e" containerName="extract-content"
Jan 27 19:41:52 crc kubenswrapper[5049]: I0127 19:41:52.818854 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35498c4-ab23-491c-8ddb-7d33e959fa0e" containerName="extract-content"
Jan 27 19:41:52 crc kubenswrapper[5049]: E0127 19:41:52.818874 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35498c4-ab23-491c-8ddb-7d33e959fa0e" containerName="extract-utilities"
Jan 27 19:41:52 crc kubenswrapper[5049]: I0127 19:41:52.818885 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35498c4-ab23-491c-8ddb-7d33e959fa0e" containerName="extract-utilities"
Jan 27 19:41:52 crc kubenswrapper[5049]: I0127 19:41:52.819124 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="d35498c4-ab23-491c-8ddb-7d33e959fa0e" containerName="registry-server"
Jan 27 19:41:52 crc kubenswrapper[5049]: I0127 19:41:52.824151 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6879"
Jan 27 19:41:52 crc kubenswrapper[5049]: I0127 19:41:52.844154 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6879"]
Jan 27 19:41:52 crc kubenswrapper[5049]: I0127 19:41:52.938006 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc0438a5-9988-47c7-a925-5a22a6d6a1d4-catalog-content\") pod \"community-operators-s6879\" (UID: \"cc0438a5-9988-47c7-a925-5a22a6d6a1d4\") " pod="openshift-marketplace/community-operators-s6879"
Jan 27 19:41:52 crc kubenswrapper[5049]: I0127 19:41:52.938094 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc0438a5-9988-47c7-a925-5a22a6d6a1d4-utilities\") pod \"community-operators-s6879\" (UID: \"cc0438a5-9988-47c7-a925-5a22a6d6a1d4\") " pod="openshift-marketplace/community-operators-s6879"
Jan 27 19:41:52 crc kubenswrapper[5049]: I0127 19:41:52.938192 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm2r9\" (UniqueName: \"kubernetes.io/projected/cc0438a5-9988-47c7-a925-5a22a6d6a1d4-kube-api-access-zm2r9\") pod \"community-operators-s6879\" (UID: \"cc0438a5-9988-47c7-a925-5a22a6d6a1d4\") " pod="openshift-marketplace/community-operators-s6879"
Jan 27 19:41:53 crc kubenswrapper[5049]: I0127 19:41:53.040224 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc0438a5-9988-47c7-a925-5a22a6d6a1d4-catalog-content\") pod \"community-operators-s6879\" (UID: \"cc0438a5-9988-47c7-a925-5a22a6d6a1d4\") " pod="openshift-marketplace/community-operators-s6879"
Jan 27 19:41:53 crc kubenswrapper[5049]: I0127 19:41:53.040277 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc0438a5-9988-47c7-a925-5a22a6d6a1d4-utilities\") pod \"community-operators-s6879\" (UID: \"cc0438a5-9988-47c7-a925-5a22a6d6a1d4\") " pod="openshift-marketplace/community-operators-s6879"
Jan 27 19:41:53 crc kubenswrapper[5049]: I0127 19:41:53.040365 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm2r9\" (UniqueName: \"kubernetes.io/projected/cc0438a5-9988-47c7-a925-5a22a6d6a1d4-kube-api-access-zm2r9\") pod \"community-operators-s6879\" (UID: \"cc0438a5-9988-47c7-a925-5a22a6d6a1d4\") " pod="openshift-marketplace/community-operators-s6879"
Jan 27 19:41:53 crc kubenswrapper[5049]: I0127 19:41:53.040962 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc0438a5-9988-47c7-a925-5a22a6d6a1d4-catalog-content\") pod \"community-operators-s6879\" (UID: \"cc0438a5-9988-47c7-a925-5a22a6d6a1d4\") " pod="openshift-marketplace/community-operators-s6879"
Jan 27 19:41:53 crc kubenswrapper[5049]: I0127 19:41:53.041358 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc0438a5-9988-47c7-a925-5a22a6d6a1d4-utilities\") pod \"community-operators-s6879\" (UID: \"cc0438a5-9988-47c7-a925-5a22a6d6a1d4\") " pod="openshift-marketplace/community-operators-s6879"
Jan 27 19:41:53 crc kubenswrapper[5049]: I0127 19:41:53.063456 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm2r9\" (UniqueName: \"kubernetes.io/projected/cc0438a5-9988-47c7-a925-5a22a6d6a1d4-kube-api-access-zm2r9\") pod \"community-operators-s6879\" (UID: \"cc0438a5-9988-47c7-a925-5a22a6d6a1d4\") " pod="openshift-marketplace/community-operators-s6879"
Jan 27 19:41:53 crc kubenswrapper[5049]: I0127 19:41:53.166872 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6879"
Jan 27 19:41:53 crc kubenswrapper[5049]: I0127 19:41:53.701717 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6879"]
Jan 27 19:41:54 crc kubenswrapper[5049]: I0127 19:41:54.034235 5049 generic.go:334] "Generic (PLEG): container finished" podID="cc0438a5-9988-47c7-a925-5a22a6d6a1d4" containerID="6dd29cb00dabb21c09ac316c86729d8c6c4afa1fa80c5dda2296a10052c95b82" exitCode=0
Jan 27 19:41:54 crc kubenswrapper[5049]: I0127 19:41:54.034338 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6879" event={"ID":"cc0438a5-9988-47c7-a925-5a22a6d6a1d4","Type":"ContainerDied","Data":"6dd29cb00dabb21c09ac316c86729d8c6c4afa1fa80c5dda2296a10052c95b82"}
Jan 27 19:41:54 crc kubenswrapper[5049]: I0127 19:41:54.035826 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6879" event={"ID":"cc0438a5-9988-47c7-a925-5a22a6d6a1d4","Type":"ContainerStarted","Data":"b16f2a0752850b1fe2147e4704a1111e87bd9355cc38beaae7ee29f731c09b2c"}
Jan 27 19:41:58 crc kubenswrapper[5049]: I0127 19:41:58.071182 5049 generic.go:334] "Generic (PLEG): container finished" podID="cc0438a5-9988-47c7-a925-5a22a6d6a1d4" containerID="e4e87759de5300c86eebbfcd6e7c2fd0526f037d4a9abd0881494b4dced0bf87" exitCode=0
Jan 27 19:41:58 crc kubenswrapper[5049]: I0127 19:41:58.071415 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6879" event={"ID":"cc0438a5-9988-47c7-a925-5a22a6d6a1d4","Type":"ContainerDied","Data":"e4e87759de5300c86eebbfcd6e7c2fd0526f037d4a9abd0881494b4dced0bf87"}
Jan 27 19:41:59 crc kubenswrapper[5049]: I0127 19:41:59.081976 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6879" event={"ID":"cc0438a5-9988-47c7-a925-5a22a6d6a1d4","Type":"ContainerStarted","Data":"f870be07c4a12227257a9e0cd27b4a30625a8eefe0548ffc888d2fea06fc806e"}
Jan 27 19:41:59 crc kubenswrapper[5049]: I0127 19:41:59.103203 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s6879" podStartSLOduration=2.304891756 podStartE2EDuration="7.103174866s" podCreationTimestamp="2026-01-27 19:41:52 +0000 UTC" firstStartedPulling="2026-01-27 19:41:54.038267443 +0000 UTC m=+9889.137241022" lastFinishedPulling="2026-01-27 19:41:58.836550563 +0000 UTC m=+9893.935524132" observedRunningTime="2026-01-27 19:41:59.099362438 +0000 UTC m=+9894.198336007" watchObservedRunningTime="2026-01-27 19:41:59.103174866 +0000 UTC m=+9894.202148455"
Jan 27 19:42:01 crc kubenswrapper[5049]: I0127 19:42:01.646327 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07"
Jan 27 19:42:01 crc kubenswrapper[5049]: E0127 19:42:01.647253 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:42:03 crc kubenswrapper[5049]: I0127 19:42:03.168961 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s6879"
Jan 27 19:42:03 crc kubenswrapper[5049]: I0127 19:42:03.169762 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s6879" Jan 27 19:42:03 crc kubenswrapper[5049]: I0127 19:42:03.235641 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s6879" Jan 27 19:42:04 crc kubenswrapper[5049]: I0127 19:42:04.188184 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s6879" Jan 27 19:42:04 crc kubenswrapper[5049]: I0127 19:42:04.269509 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6879"] Jan 27 19:42:04 crc kubenswrapper[5049]: I0127 19:42:04.297054 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-klhhr"] Jan 27 19:42:04 crc kubenswrapper[5049]: I0127 19:42:04.297310 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-klhhr" podUID="a01703fa-3762-4dd4-99d8-5f911bdc85b5" containerName="registry-server" containerID="cri-o://fb79725d74f771b76164fb03d1e54849261edd176cffccfcbef2d1d11255cb11" gracePeriod=2 Jan 27 19:42:05 crc kubenswrapper[5049]: I0127 19:42:05.135428 5049 generic.go:334] "Generic (PLEG): container finished" podID="a01703fa-3762-4dd4-99d8-5f911bdc85b5" containerID="fb79725d74f771b76164fb03d1e54849261edd176cffccfcbef2d1d11255cb11" exitCode=0 Jan 27 19:42:05 crc kubenswrapper[5049]: I0127 19:42:05.136701 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klhhr" event={"ID":"a01703fa-3762-4dd4-99d8-5f911bdc85b5","Type":"ContainerDied","Data":"fb79725d74f771b76164fb03d1e54849261edd176cffccfcbef2d1d11255cb11"} Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.078256 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-klhhr" Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.150394 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-klhhr" Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.150855 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klhhr" event={"ID":"a01703fa-3762-4dd4-99d8-5f911bdc85b5","Type":"ContainerDied","Data":"4fd608a003fb060ed91a14ab4bb614de937f15d8728cfcb96398ff320e3f9d66"} Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.150958 5049 scope.go:117] "RemoveContainer" containerID="fb79725d74f771b76164fb03d1e54849261edd176cffccfcbef2d1d11255cb11" Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.183005 5049 scope.go:117] "RemoveContainer" containerID="04619863374d76c28ae28f7af9c58cb25c90e741b89e96c9b656199b34349300" Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.193197 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01703fa-3762-4dd4-99d8-5f911bdc85b5-catalog-content\") pod \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\" (UID: \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\") " Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.193342 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01703fa-3762-4dd4-99d8-5f911bdc85b5-utilities\") pod \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\" (UID: \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\") " Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.193533 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc8bg\" (UniqueName: \"kubernetes.io/projected/a01703fa-3762-4dd4-99d8-5f911bdc85b5-kube-api-access-nc8bg\") pod \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\" (UID: \"a01703fa-3762-4dd4-99d8-5f911bdc85b5\") " Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.202446 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a01703fa-3762-4dd4-99d8-5f911bdc85b5-utilities" (OuterVolumeSpecName: "utilities") pod "a01703fa-3762-4dd4-99d8-5f911bdc85b5" (UID: "a01703fa-3762-4dd4-99d8-5f911bdc85b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.210133 5049 scope.go:117] "RemoveContainer" containerID="99959444824e8c2b5d12b600e9c355d1cb0314f0ad62ca7e4e1e0aceb7a4a852" Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.210929 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a01703fa-3762-4dd4-99d8-5f911bdc85b5-kube-api-access-nc8bg" (OuterVolumeSpecName: "kube-api-access-nc8bg") pod "a01703fa-3762-4dd4-99d8-5f911bdc85b5" (UID: "a01703fa-3762-4dd4-99d8-5f911bdc85b5"). InnerVolumeSpecName "kube-api-access-nc8bg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.298711 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a01703fa-3762-4dd4-99d8-5f911bdc85b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.298750 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc8bg\" (UniqueName: \"kubernetes.io/projected/a01703fa-3762-4dd4-99d8-5f911bdc85b5-kube-api-access-nc8bg\") on node \"crc\" DevicePath \"\"" Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.304515 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a01703fa-3762-4dd4-99d8-5f911bdc85b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a01703fa-3762-4dd4-99d8-5f911bdc85b5" (UID: "a01703fa-3762-4dd4-99d8-5f911bdc85b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.401162 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a01703fa-3762-4dd4-99d8-5f911bdc85b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.503582 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-klhhr"] Jan 27 19:42:06 crc kubenswrapper[5049]: I0127 19:42:06.515721 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-klhhr"] Jan 27 19:42:07 crc kubenswrapper[5049]: I0127 19:42:07.657261 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a01703fa-3762-4dd4-99d8-5f911bdc85b5" path="/var/lib/kubelet/pods/a01703fa-3762-4dd4-99d8-5f911bdc85b5/volumes" Jan 27 19:42:16 crc kubenswrapper[5049]: I0127 19:42:16.647168 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" Jan 27 19:42:16 crc kubenswrapper[5049]: E0127 19:42:16.648117 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.542536 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mtjn2/must-gather-sjljc"] Jan 27 19:42:24 crc kubenswrapper[5049]: E0127 19:42:24.543622 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01703fa-3762-4dd4-99d8-5f911bdc85b5" containerName="extract-utilities" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.543749 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01703fa-3762-4dd4-99d8-5f911bdc85b5" containerName="extract-utilities" Jan 27 19:42:24 crc kubenswrapper[5049]: E0127 19:42:24.543766 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01703fa-3762-4dd4-99d8-5f911bdc85b5" containerName="registry-server" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.543776 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01703fa-3762-4dd4-99d8-5f911bdc85b5" containerName="registry-server" Jan 27 
19:42:24 crc kubenswrapper[5049]: E0127 19:42:24.543816 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01703fa-3762-4dd4-99d8-5f911bdc85b5" containerName="extract-content" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.543824 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01703fa-3762-4dd4-99d8-5f911bdc85b5" containerName="extract-content" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.544076 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01703fa-3762-4dd4-99d8-5f911bdc85b5" containerName="registry-server" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.545162 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mtjn2/must-gather-sjljc" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.548426 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mtjn2"/"default-dockercfg-phq2b" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.548758 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mtjn2"/"kube-root-ca.crt" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.548921 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mtjn2"/"openshift-service-ca.crt" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.553205 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mtjn2/must-gather-sjljc"] Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.693002 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc6fr\" (UniqueName: \"kubernetes.io/projected/629a46ea-c5f0-485b-9937-50eaca5ed965-kube-api-access-zc6fr\") pod \"must-gather-sjljc\" (UID: \"629a46ea-c5f0-485b-9937-50eaca5ed965\") " pod="openshift-must-gather-mtjn2/must-gather-sjljc" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.693058 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/629a46ea-c5f0-485b-9937-50eaca5ed965-must-gather-output\") pod \"must-gather-sjljc\" (UID: \"629a46ea-c5f0-485b-9937-50eaca5ed965\") " pod="openshift-must-gather-mtjn2/must-gather-sjljc" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.794777 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc6fr\" (UniqueName: \"kubernetes.io/projected/629a46ea-c5f0-485b-9937-50eaca5ed965-kube-api-access-zc6fr\") pod \"must-gather-sjljc\" (UID: \"629a46ea-c5f0-485b-9937-50eaca5ed965\") " pod="openshift-must-gather-mtjn2/must-gather-sjljc" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.794835 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/629a46ea-c5f0-485b-9937-50eaca5ed965-must-gather-output\") pod \"must-gather-sjljc\" (UID: \"629a46ea-c5f0-485b-9937-50eaca5ed965\") " pod="openshift-must-gather-mtjn2/must-gather-sjljc" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.795299 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/629a46ea-c5f0-485b-9937-50eaca5ed965-must-gather-output\") pod \"must-gather-sjljc\" (UID: \"629a46ea-c5f0-485b-9937-50eaca5ed965\") " pod="openshift-must-gather-mtjn2/must-gather-sjljc" Jan 27 19:42:24 crc 
kubenswrapper[5049]: I0127 19:42:24.816578 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc6fr\" (UniqueName: \"kubernetes.io/projected/629a46ea-c5f0-485b-9937-50eaca5ed965-kube-api-access-zc6fr\") pod \"must-gather-sjljc\" (UID: \"629a46ea-c5f0-485b-9937-50eaca5ed965\") " pod="openshift-must-gather-mtjn2/must-gather-sjljc" Jan 27 19:42:24 crc kubenswrapper[5049]: I0127 19:42:24.864811 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mtjn2/must-gather-sjljc" Jan 27 19:42:25 crc kubenswrapper[5049]: I0127 19:42:25.338193 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mtjn2/must-gather-sjljc"] Jan 27 19:42:26 crc kubenswrapper[5049]: I0127 19:42:26.339421 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mtjn2/must-gather-sjljc" event={"ID":"629a46ea-c5f0-485b-9937-50eaca5ed965","Type":"ContainerStarted","Data":"654f3e15fd42c05552ac050d5cb5a5b874d3ab4fa424db02eeee9ca3d5e1fc9f"} Jan 27 19:42:31 crc kubenswrapper[5049]: I0127 19:42:31.646538 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" Jan 27 19:42:35 crc kubenswrapper[5049]: I0127 19:42:35.424464 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"a239abd78b727996fe25828bb1f7bd7d0fd88569e96b06f68db274a7a474a4a4"} Jan 27 19:42:35 crc kubenswrapper[5049]: I0127 19:42:35.429454 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mtjn2/must-gather-sjljc" event={"ID":"629a46ea-c5f0-485b-9937-50eaca5ed965","Type":"ContainerStarted","Data":"f8499898143ee0eafcbd7bae630105dbc251feef23b27faf66e377d56d772b16"} Jan 27 19:42:36 crc kubenswrapper[5049]: I0127 19:42:36.438777 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mtjn2/must-gather-sjljc" event={"ID":"629a46ea-c5f0-485b-9937-50eaca5ed965","Type":"ContainerStarted","Data":"0fd805b719c3109458d87288e5f1fbe4fa4531134077e0f28e64e31a951d4336"} Jan 27 19:42:36 crc kubenswrapper[5049]: I0127 19:42:36.457945 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mtjn2/must-gather-sjljc" podStartSLOduration=2.956047722 podStartE2EDuration="12.457924208s" podCreationTimestamp="2026-01-27 19:42:24 +0000 UTC" firstStartedPulling="2026-01-27 19:42:25.344660376 +0000 UTC m=+9920.443633925" lastFinishedPulling="2026-01-27 19:42:34.846536862 +0000 UTC m=+9929.945510411" observedRunningTime="2026-01-27 19:42:36.455344695 +0000 UTC m=+9931.554318244" watchObservedRunningTime="2026-01-27 19:42:36.457924208 +0000 UTC m=+9931.556897757" Jan 27 19:42:38 crc kubenswrapper[5049]: I0127 19:42:38.969475 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mtjn2/crc-debug-hnn86"] Jan 27 19:42:38 crc kubenswrapper[5049]: I0127 19:42:38.971486 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mtjn2/crc-debug-hnn86" Jan 27 19:42:39 crc kubenswrapper[5049]: I0127 19:42:39.110068 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvrdg\" (UniqueName: \"kubernetes.io/projected/8219fb80-bc04-4227-bf3a-baeb0849ef43-kube-api-access-bvrdg\") pod \"crc-debug-hnn86\" (UID: \"8219fb80-bc04-4227-bf3a-baeb0849ef43\") " pod="openshift-must-gather-mtjn2/crc-debug-hnn86" Jan 27 19:42:39 crc kubenswrapper[5049]: I0127 19:42:39.110204 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8219fb80-bc04-4227-bf3a-baeb0849ef43-host\") pod \"crc-debug-hnn86\" (UID: \"8219fb80-bc04-4227-bf3a-baeb0849ef43\") " pod="openshift-must-gather-mtjn2/crc-debug-hnn86" Jan 27 19:42:39 crc kubenswrapper[5049]: I0127 19:42:39.211931 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvrdg\" (UniqueName: \"kubernetes.io/projected/8219fb80-bc04-4227-bf3a-baeb0849ef43-kube-api-access-bvrdg\") pod \"crc-debug-hnn86\" (UID: \"8219fb80-bc04-4227-bf3a-baeb0849ef43\") " pod="openshift-must-gather-mtjn2/crc-debug-hnn86" Jan 27 19:42:39 crc kubenswrapper[5049]: I0127 19:42:39.212022 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8219fb80-bc04-4227-bf3a-baeb0849ef43-host\") pod \"crc-debug-hnn86\" (UID: \"8219fb80-bc04-4227-bf3a-baeb0849ef43\") " pod="openshift-must-gather-mtjn2/crc-debug-hnn86" Jan 27 19:42:39 crc kubenswrapper[5049]: I0127 19:42:39.212195 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8219fb80-bc04-4227-bf3a-baeb0849ef43-host\") pod \"crc-debug-hnn86\" (UID: \"8219fb80-bc04-4227-bf3a-baeb0849ef43\") " pod="openshift-must-gather-mtjn2/crc-debug-hnn86" Jan 27 19:42:39 crc kubenswrapper[5049]: I0127 19:42:39.233724 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvrdg\" (UniqueName: \"kubernetes.io/projected/8219fb80-bc04-4227-bf3a-baeb0849ef43-kube-api-access-bvrdg\") pod \"crc-debug-hnn86\" (UID: \"8219fb80-bc04-4227-bf3a-baeb0849ef43\") " pod="openshift-must-gather-mtjn2/crc-debug-hnn86" Jan 27 19:42:39 crc kubenswrapper[5049]: I0127 19:42:39.296130 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mtjn2/crc-debug-hnn86" Jan 27 19:42:39 crc kubenswrapper[5049]: W0127 19:42:39.326357 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8219fb80_bc04_4227_bf3a_baeb0849ef43.slice/crio-773ade6848ce4439f6c5dc43e17e6c79863ba7629475094e9126586f14445f48 WatchSource:0}: Error finding container 773ade6848ce4439f6c5dc43e17e6c79863ba7629475094e9126586f14445f48: Status 404 returned error can't find the container with id 773ade6848ce4439f6c5dc43e17e6c79863ba7629475094e9126586f14445f48 Jan 27 19:42:39 crc kubenswrapper[5049]: I0127 19:42:39.462344 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mtjn2/crc-debug-hnn86" event={"ID":"8219fb80-bc04-4227-bf3a-baeb0849ef43","Type":"ContainerStarted","Data":"773ade6848ce4439f6c5dc43e17e6c79863ba7629475094e9126586f14445f48"} Jan 27 19:42:43 crc kubenswrapper[5049]: I0127 19:42:43.975805 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t656w"] Jan 27 19:42:43 crc kubenswrapper[5049]: I0127 19:42:43.979526 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:42:43 crc kubenswrapper[5049]: I0127 19:42:43.993413 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t656w"] Jan 27 19:42:44 crc kubenswrapper[5049]: I0127 19:42:44.122882 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54zqm\" (UniqueName: \"kubernetes.io/projected/f5177703-db43-48d8-ada0-2f8d79bd7061-kube-api-access-54zqm\") pod \"redhat-operators-t656w\" (UID: \"f5177703-db43-48d8-ada0-2f8d79bd7061\") " pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:42:44 crc kubenswrapper[5049]: I0127 19:42:44.122963 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5177703-db43-48d8-ada0-2f8d79bd7061-catalog-content\") pod \"redhat-operators-t656w\" (UID: \"f5177703-db43-48d8-ada0-2f8d79bd7061\") " pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:42:44 crc kubenswrapper[5049]: I0127 19:42:44.122982 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5177703-db43-48d8-ada0-2f8d79bd7061-utilities\") pod \"redhat-operators-t656w\" (UID: \"f5177703-db43-48d8-ada0-2f8d79bd7061\") " pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:42:44 crc kubenswrapper[5049]: I0127 19:42:44.224706 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54zqm\" (UniqueName: \"kubernetes.io/projected/f5177703-db43-48d8-ada0-2f8d79bd7061-kube-api-access-54zqm\") pod \"redhat-operators-t656w\" (UID: \"f5177703-db43-48d8-ada0-2f8d79bd7061\") " pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:42:44 crc kubenswrapper[5049]: I0127 19:42:44.225277 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5177703-db43-48d8-ada0-2f8d79bd7061-catalog-content\") pod \"redhat-operators-t656w\" (UID: \"f5177703-db43-48d8-ada0-2f8d79bd7061\") " pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:42:44 crc kubenswrapper[5049]: I0127 19:42:44.225305 5049 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5177703-db43-48d8-ada0-2f8d79bd7061-utilities\") pod \"redhat-operators-t656w\" (UID: \"f5177703-db43-48d8-ada0-2f8d79bd7061\") " pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:42:44 crc kubenswrapper[5049]: I0127 19:42:44.226019 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5177703-db43-48d8-ada0-2f8d79bd7061-utilities\") pod \"redhat-operators-t656w\" (UID: \"f5177703-db43-48d8-ada0-2f8d79bd7061\") " pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:42:44 crc kubenswrapper[5049]: I0127 19:42:44.226299 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5177703-db43-48d8-ada0-2f8d79bd7061-catalog-content\") pod \"redhat-operators-t656w\" (UID: \"f5177703-db43-48d8-ada0-2f8d79bd7061\") " pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:42:44 crc kubenswrapper[5049]: I0127 19:42:44.257650 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54zqm\" (UniqueName: \"kubernetes.io/projected/f5177703-db43-48d8-ada0-2f8d79bd7061-kube-api-access-54zqm\") pod \"redhat-operators-t656w\" (UID: \"f5177703-db43-48d8-ada0-2f8d79bd7061\") " pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:42:44 crc kubenswrapper[5049]: I0127 19:42:44.309520 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:42:52 crc kubenswrapper[5049]: I0127 19:42:52.175454 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t656w"] Jan 27 19:42:52 crc kubenswrapper[5049]: W0127 19:42:52.177705 5049 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5177703_db43_48d8_ada0_2f8d79bd7061.slice/crio-13aaacfe97b60dc6a4756f5ec380de042670f989f8d82443cabe9753089510f8 WatchSource:0}: Error finding container 13aaacfe97b60dc6a4756f5ec380de042670f989f8d82443cabe9753089510f8: Status 404 returned error can't find the container with id 13aaacfe97b60dc6a4756f5ec380de042670f989f8d82443cabe9753089510f8 Jan 27 19:42:52 crc kubenswrapper[5049]: I0127 19:42:52.576117 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mtjn2/crc-debug-hnn86" event={"ID":"8219fb80-bc04-4227-bf3a-baeb0849ef43","Type":"ContainerStarted","Data":"6ea4ef9dec070e37e24999512142491c880bba70dd78f7d0e7673bf1189f06be"} Jan 27 19:42:52 crc kubenswrapper[5049]: I0127 19:42:52.577371 5049 generic.go:334] "Generic (PLEG): container finished" podID="f5177703-db43-48d8-ada0-2f8d79bd7061" containerID="0b51e442a6157a4a23fc018e7ca5ef505d659dd02cc34a87f8fa6acac6552da7" exitCode=0 Jan 27 19:42:52 crc kubenswrapper[5049]: I0127 19:42:52.577404 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t656w" event={"ID":"f5177703-db43-48d8-ada0-2f8d79bd7061","Type":"ContainerDied","Data":"0b51e442a6157a4a23fc018e7ca5ef505d659dd02cc34a87f8fa6acac6552da7"} Jan 27 19:42:52 crc kubenswrapper[5049]: I0127 19:42:52.577419 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t656w" 
event={"ID":"f5177703-db43-48d8-ada0-2f8d79bd7061","Type":"ContainerStarted","Data":"13aaacfe97b60dc6a4756f5ec380de042670f989f8d82443cabe9753089510f8"} Jan 27 19:42:52 crc kubenswrapper[5049]: I0127 19:42:52.600920 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mtjn2/crc-debug-hnn86" podStartSLOduration=2.306226432 podStartE2EDuration="14.600885723s" podCreationTimestamp="2026-01-27 19:42:38 +0000 UTC" firstStartedPulling="2026-01-27 19:42:39.328523787 +0000 UTC m=+9934.427497336" lastFinishedPulling="2026-01-27 19:42:51.623183078 +0000 UTC m=+9946.722156627" observedRunningTime="2026-01-27 19:42:52.592527326 +0000 UTC m=+9947.691500875" watchObservedRunningTime="2026-01-27 19:42:52.600885723 +0000 UTC m=+9947.699859312" Jan 27 19:42:54 crc kubenswrapper[5049]: I0127 19:42:54.599637 5049 generic.go:334] "Generic (PLEG): container finished" podID="f5177703-db43-48d8-ada0-2f8d79bd7061" containerID="27aea46b58e76a82f39c3463a31c1a31abbfcda3347483de7ac0b7c37c3bc9ce" exitCode=0 Jan 27 19:42:54 crc kubenswrapper[5049]: I0127 19:42:54.599769 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t656w" event={"ID":"f5177703-db43-48d8-ada0-2f8d79bd7061","Type":"ContainerDied","Data":"27aea46b58e76a82f39c3463a31c1a31abbfcda3347483de7ac0b7c37c3bc9ce"} Jan 27 19:42:56 crc kubenswrapper[5049]: I0127 19:42:56.617754 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t656w" event={"ID":"f5177703-db43-48d8-ada0-2f8d79bd7061","Type":"ContainerStarted","Data":"72aa564a16b53883df0f8aeedb4f0a7ff7ad5bb1e017a4b8855255596bef86d6"} Jan 27 19:42:56 crc kubenswrapper[5049]: I0127 19:42:56.639789 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t656w" podStartSLOduration=9.900138744 podStartE2EDuration="13.639767797s" podCreationTimestamp="2026-01-27 19:42:43 +0000 UTC" firstStartedPulling="2026-01-27 19:42:52.579120234 +0000 UTC m=+9947.678093783" lastFinishedPulling="2026-01-27 19:42:56.318749247 +0000 UTC m=+9951.417722836" observedRunningTime="2026-01-27 19:42:56.636285818 +0000 UTC m=+9951.735259397" watchObservedRunningTime="2026-01-27 19:42:56.639767797 +0000 UTC m=+9951.738741366" Jan 27 19:43:04 crc kubenswrapper[5049]: I0127 19:43:04.310278 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:43:04 crc kubenswrapper[5049]: I0127 19:43:04.310863 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:43:05 crc kubenswrapper[5049]: I0127 19:43:05.361163 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t656w" podUID="f5177703-db43-48d8-ada0-2f8d79bd7061" containerName="registry-server" probeResult="failure" output=< Jan 27 19:43:05 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 19:43:05 crc kubenswrapper[5049]: > Jan 27 19:43:14 crc kubenswrapper[5049]: I0127 19:43:14.764836 5049 generic.go:334] "Generic (PLEG): container finished" podID="8219fb80-bc04-4227-bf3a-baeb0849ef43" containerID="6ea4ef9dec070e37e24999512142491c880bba70dd78f7d0e7673bf1189f06be" exitCode=0 Jan 27 19:43:14 crc kubenswrapper[5049]: I0127 19:43:14.764922 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mtjn2/crc-debug-hnn86" 
event={"ID":"8219fb80-bc04-4227-bf3a-baeb0849ef43","Type":"ContainerDied","Data":"6ea4ef9dec070e37e24999512142491c880bba70dd78f7d0e7673bf1189f06be"} Jan 27 19:43:15 crc kubenswrapper[5049]: I0127 19:43:15.358309 5049 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t656w" podUID="f5177703-db43-48d8-ada0-2f8d79bd7061" containerName="registry-server" probeResult="failure" output=< Jan 27 19:43:15 crc kubenswrapper[5049]: timeout: failed to connect service ":50051" within 1s Jan 27 19:43:15 crc kubenswrapper[5049]: > Jan 27 19:43:15 crc kubenswrapper[5049]: I0127 19:43:15.901961 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mtjn2/crc-debug-hnn86" Jan 27 19:43:15 crc kubenswrapper[5049]: I0127 19:43:15.937170 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mtjn2/crc-debug-hnn86"] Jan 27 19:43:15 crc kubenswrapper[5049]: I0127 19:43:15.945205 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8219fb80-bc04-4227-bf3a-baeb0849ef43-host\") pod \"8219fb80-bc04-4227-bf3a-baeb0849ef43\" (UID: \"8219fb80-bc04-4227-bf3a-baeb0849ef43\") " Jan 27 19:43:15 crc kubenswrapper[5049]: I0127 19:43:15.945316 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8219fb80-bc04-4227-bf3a-baeb0849ef43-host" (OuterVolumeSpecName: "host") pod "8219fb80-bc04-4227-bf3a-baeb0849ef43" (UID: "8219fb80-bc04-4227-bf3a-baeb0849ef43"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:43:15 crc kubenswrapper[5049]: I0127 19:43:15.945333 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvrdg\" (UniqueName: \"kubernetes.io/projected/8219fb80-bc04-4227-bf3a-baeb0849ef43-kube-api-access-bvrdg\") pod \"8219fb80-bc04-4227-bf3a-baeb0849ef43\" (UID: \"8219fb80-bc04-4227-bf3a-baeb0849ef43\") " Jan 27 19:43:15 crc kubenswrapper[5049]: I0127 19:43:15.945880 5049 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8219fb80-bc04-4227-bf3a-baeb0849ef43-host\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:15 crc kubenswrapper[5049]: I0127 19:43:15.946751 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mtjn2/crc-debug-hnn86"] Jan 27 19:43:15 crc kubenswrapper[5049]: I0127 19:43:15.952866 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8219fb80-bc04-4227-bf3a-baeb0849ef43-kube-api-access-bvrdg" (OuterVolumeSpecName: "kube-api-access-bvrdg") pod "8219fb80-bc04-4227-bf3a-baeb0849ef43" (UID: "8219fb80-bc04-4227-bf3a-baeb0849ef43"). InnerVolumeSpecName "kube-api-access-bvrdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:43:16 crc kubenswrapper[5049]: I0127 19:43:16.048104 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvrdg\" (UniqueName: \"kubernetes.io/projected/8219fb80-bc04-4227-bf3a-baeb0849ef43-kube-api-access-bvrdg\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:16 crc kubenswrapper[5049]: I0127 19:43:16.781843 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="773ade6848ce4439f6c5dc43e17e6c79863ba7629475094e9126586f14445f48" Jan 27 19:43:16 crc kubenswrapper[5049]: I0127 19:43:16.781947 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mtjn2/crc-debug-hnn86" Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.101121 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mtjn2/crc-debug-67tzg"] Jan 27 19:43:17 crc kubenswrapper[5049]: E0127 19:43:17.102634 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8219fb80-bc04-4227-bf3a-baeb0849ef43" containerName="container-00" Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.102762 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="8219fb80-bc04-4227-bf3a-baeb0849ef43" containerName="container-00" Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.103060 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="8219fb80-bc04-4227-bf3a-baeb0849ef43" containerName="container-00" Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.103818 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mtjn2/crc-debug-67tzg" Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.168527 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/756b519e-6d96-4d54-94ec-f58fed18fb2c-host\") pod \"crc-debug-67tzg\" (UID: \"756b519e-6d96-4d54-94ec-f58fed18fb2c\") " pod="openshift-must-gather-mtjn2/crc-debug-67tzg" Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.168848 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcv7f\" (UniqueName: \"kubernetes.io/projected/756b519e-6d96-4d54-94ec-f58fed18fb2c-kube-api-access-jcv7f\") pod \"crc-debug-67tzg\" (UID: \"756b519e-6d96-4d54-94ec-f58fed18fb2c\") " pod="openshift-must-gather-mtjn2/crc-debug-67tzg" Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.271135 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/756b519e-6d96-4d54-94ec-f58fed18fb2c-host\") pod \"crc-debug-67tzg\" (UID: \"756b519e-6d96-4d54-94ec-f58fed18fb2c\") " pod="openshift-must-gather-mtjn2/crc-debug-67tzg" Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.271197 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcv7f\" (UniqueName: \"kubernetes.io/projected/756b519e-6d96-4d54-94ec-f58fed18fb2c-kube-api-access-jcv7f\") pod \"crc-debug-67tzg\" (UID: \"756b519e-6d96-4d54-94ec-f58fed18fb2c\") " pod="openshift-must-gather-mtjn2/crc-debug-67tzg" Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.271253 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/756b519e-6d96-4d54-94ec-f58fed18fb2c-host\") pod \"crc-debug-67tzg\" (UID: \"756b519e-6d96-4d54-94ec-f58fed18fb2c\") " pod="openshift-must-gather-mtjn2/crc-debug-67tzg" Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.295459 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcv7f\" (UniqueName: \"kubernetes.io/projected/756b519e-6d96-4d54-94ec-f58fed18fb2c-kube-api-access-jcv7f\") pod \"crc-debug-67tzg\" (UID: \"756b519e-6d96-4d54-94ec-f58fed18fb2c\") " pod="openshift-must-gather-mtjn2/crc-debug-67tzg" Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.422195 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mtjn2/crc-debug-67tzg" Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.657291 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8219fb80-bc04-4227-bf3a-baeb0849ef43" path="/var/lib/kubelet/pods/8219fb80-bc04-4227-bf3a-baeb0849ef43/volumes" Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.791895 5049 generic.go:334] "Generic (PLEG): container finished" podID="756b519e-6d96-4d54-94ec-f58fed18fb2c" containerID="3d46b9aced7528e361f942962bbd2f7c84f1a9b1b77603b3493efdc54ce00c0d" exitCode=1 Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.791943 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mtjn2/crc-debug-67tzg" event={"ID":"756b519e-6d96-4d54-94ec-f58fed18fb2c","Type":"ContainerDied","Data":"3d46b9aced7528e361f942962bbd2f7c84f1a9b1b77603b3493efdc54ce00c0d"} Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.791973 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mtjn2/crc-debug-67tzg" event={"ID":"756b519e-6d96-4d54-94ec-f58fed18fb2c","Type":"ContainerStarted","Data":"7f3a6d13f0550decaa0ff8f3bd0443c78f5d063c9fc0ade9b73bdb5467981634"} Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.829823 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mtjn2/crc-debug-67tzg"] Jan 27 19:43:17 crc kubenswrapper[5049]: I0127 19:43:17.838492 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mtjn2/crc-debug-67tzg"] Jan 27 19:43:18 crc kubenswrapper[5049]: I0127 19:43:18.909851 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mtjn2/crc-debug-67tzg" Jan 27 19:43:19 crc kubenswrapper[5049]: I0127 19:43:19.025556 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcv7f\" (UniqueName: \"kubernetes.io/projected/756b519e-6d96-4d54-94ec-f58fed18fb2c-kube-api-access-jcv7f\") pod \"756b519e-6d96-4d54-94ec-f58fed18fb2c\" (UID: \"756b519e-6d96-4d54-94ec-f58fed18fb2c\") " Jan 27 19:43:19 crc kubenswrapper[5049]: I0127 19:43:19.026015 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/756b519e-6d96-4d54-94ec-f58fed18fb2c-host\") pod \"756b519e-6d96-4d54-94ec-f58fed18fb2c\" (UID: \"756b519e-6d96-4d54-94ec-f58fed18fb2c\") " Jan 27 19:43:19 crc kubenswrapper[5049]: I0127 19:43:19.026165 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/756b519e-6d96-4d54-94ec-f58fed18fb2c-host" (OuterVolumeSpecName: "host") pod "756b519e-6d96-4d54-94ec-f58fed18fb2c" (UID: "756b519e-6d96-4d54-94ec-f58fed18fb2c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 19:43:19 crc kubenswrapper[5049]: I0127 19:43:19.026611 5049 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/756b519e-6d96-4d54-94ec-f58fed18fb2c-host\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:19 crc kubenswrapper[5049]: I0127 19:43:19.038429 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/756b519e-6d96-4d54-94ec-f58fed18fb2c-kube-api-access-jcv7f" (OuterVolumeSpecName: "kube-api-access-jcv7f") pod "756b519e-6d96-4d54-94ec-f58fed18fb2c" (UID: "756b519e-6d96-4d54-94ec-f58fed18fb2c"). InnerVolumeSpecName "kube-api-access-jcv7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:43:19 crc kubenswrapper[5049]: I0127 19:43:19.128621 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcv7f\" (UniqueName: \"kubernetes.io/projected/756b519e-6d96-4d54-94ec-f58fed18fb2c-kube-api-access-jcv7f\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:19 crc kubenswrapper[5049]: I0127 19:43:19.662451 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="756b519e-6d96-4d54-94ec-f58fed18fb2c" path="/var/lib/kubelet/pods/756b519e-6d96-4d54-94ec-f58fed18fb2c/volumes" Jan 27 19:43:19 crc kubenswrapper[5049]: I0127 19:43:19.816667 5049 scope.go:117] "RemoveContainer" containerID="3d46b9aced7528e361f942962bbd2f7c84f1a9b1b77603b3493efdc54ce00c0d" Jan 27 19:43:19 crc kubenswrapper[5049]: I0127 19:43:19.816742 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mtjn2/crc-debug-67tzg" Jan 27 19:43:24 crc kubenswrapper[5049]: I0127 19:43:24.383046 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:43:24 crc kubenswrapper[5049]: I0127 19:43:24.446482 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:43:24 crc kubenswrapper[5049]: I0127 19:43:24.632970 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t656w"] Jan 27 19:43:25 crc kubenswrapper[5049]: I0127 19:43:25.865957 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t656w" podUID="f5177703-db43-48d8-ada0-2f8d79bd7061" containerName="registry-server" containerID="cri-o://72aa564a16b53883df0f8aeedb4f0a7ff7ad5bb1e017a4b8855255596bef86d6" gracePeriod=2 Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.471034 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.569809 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5177703-db43-48d8-ada0-2f8d79bd7061-utilities\") pod \"f5177703-db43-48d8-ada0-2f8d79bd7061\" (UID: \"f5177703-db43-48d8-ada0-2f8d79bd7061\") " Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.570095 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5177703-db43-48d8-ada0-2f8d79bd7061-catalog-content\") pod \"f5177703-db43-48d8-ada0-2f8d79bd7061\" (UID: \"f5177703-db43-48d8-ada0-2f8d79bd7061\") " Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.570376 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54zqm\" (UniqueName: \"kubernetes.io/projected/f5177703-db43-48d8-ada0-2f8d79bd7061-kube-api-access-54zqm\") pod \"f5177703-db43-48d8-ada0-2f8d79bd7061\" (UID: \"f5177703-db43-48d8-ada0-2f8d79bd7061\") " Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.572830 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5177703-db43-48d8-ada0-2f8d79bd7061-utilities" (OuterVolumeSpecName: "utilities") pod "f5177703-db43-48d8-ada0-2f8d79bd7061" (UID: "f5177703-db43-48d8-ada0-2f8d79bd7061"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.578166 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5177703-db43-48d8-ada0-2f8d79bd7061-kube-api-access-54zqm" (OuterVolumeSpecName: "kube-api-access-54zqm") pod "f5177703-db43-48d8-ada0-2f8d79bd7061" (UID: "f5177703-db43-48d8-ada0-2f8d79bd7061"). InnerVolumeSpecName "kube-api-access-54zqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.673871 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5177703-db43-48d8-ada0-2f8d79bd7061-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.673941 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54zqm\" (UniqueName: \"kubernetes.io/projected/f5177703-db43-48d8-ada0-2f8d79bd7061-kube-api-access-54zqm\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.691454 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5177703-db43-48d8-ada0-2f8d79bd7061-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5177703-db43-48d8-ada0-2f8d79bd7061" (UID: "f5177703-db43-48d8-ada0-2f8d79bd7061"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.775487 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5177703-db43-48d8-ada0-2f8d79bd7061-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.877877 5049 generic.go:334] "Generic (PLEG): container finished" podID="f5177703-db43-48d8-ada0-2f8d79bd7061" containerID="72aa564a16b53883df0f8aeedb4f0a7ff7ad5bb1e017a4b8855255596bef86d6" exitCode=0 Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.877937 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t656w" event={"ID":"f5177703-db43-48d8-ada0-2f8d79bd7061","Type":"ContainerDied","Data":"72aa564a16b53883df0f8aeedb4f0a7ff7ad5bb1e017a4b8855255596bef86d6"} Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.879016 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t656w" event={"ID":"f5177703-db43-48d8-ada0-2f8d79bd7061","Type":"ContainerDied","Data":"13aaacfe97b60dc6a4756f5ec380de042670f989f8d82443cabe9753089510f8"} Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.878002 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t656w" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.879062 5049 scope.go:117] "RemoveContainer" containerID="72aa564a16b53883df0f8aeedb4f0a7ff7ad5bb1e017a4b8855255596bef86d6" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.902324 5049 scope.go:117] "RemoveContainer" containerID="27aea46b58e76a82f39c3463a31c1a31abbfcda3347483de7ac0b7c37c3bc9ce" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.923521 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t656w"] Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.933154 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t656w"] Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.952197 5049 scope.go:117] "RemoveContainer" containerID="0b51e442a6157a4a23fc018e7ca5ef505d659dd02cc34a87f8fa6acac6552da7" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.983929 5049 scope.go:117] "RemoveContainer" containerID="72aa564a16b53883df0f8aeedb4f0a7ff7ad5bb1e017a4b8855255596bef86d6" Jan 27 19:43:26 crc kubenswrapper[5049]: E0127 19:43:26.984532 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72aa564a16b53883df0f8aeedb4f0a7ff7ad5bb1e017a4b8855255596bef86d6\": container with ID starting with 72aa564a16b53883df0f8aeedb4f0a7ff7ad5bb1e017a4b8855255596bef86d6 not found: ID does not exist" containerID="72aa564a16b53883df0f8aeedb4f0a7ff7ad5bb1e017a4b8855255596bef86d6" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.984585 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72aa564a16b53883df0f8aeedb4f0a7ff7ad5bb1e017a4b8855255596bef86d6"} err="failed to get container status \"72aa564a16b53883df0f8aeedb4f0a7ff7ad5bb1e017a4b8855255596bef86d6\": rpc error: code = NotFound desc = could not find container \"72aa564a16b53883df0f8aeedb4f0a7ff7ad5bb1e017a4b8855255596bef86d6\": container with ID starting with 72aa564a16b53883df0f8aeedb4f0a7ff7ad5bb1e017a4b8855255596bef86d6 not found: ID does not exist" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.984618 5049 scope.go:117] "RemoveContainer" containerID="27aea46b58e76a82f39c3463a31c1a31abbfcda3347483de7ac0b7c37c3bc9ce" Jan 27 19:43:26 crc kubenswrapper[5049]: E0127 19:43:26.985112 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27aea46b58e76a82f39c3463a31c1a31abbfcda3347483de7ac0b7c37c3bc9ce\": container with ID starting with 27aea46b58e76a82f39c3463a31c1a31abbfcda3347483de7ac0b7c37c3bc9ce not found: ID does not exist" containerID="27aea46b58e76a82f39c3463a31c1a31abbfcda3347483de7ac0b7c37c3bc9ce" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.985408 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27aea46b58e76a82f39c3463a31c1a31abbfcda3347483de7ac0b7c37c3bc9ce"} err="failed to get container status \"27aea46b58e76a82f39c3463a31c1a31abbfcda3347483de7ac0b7c37c3bc9ce\": rpc error: code = NotFound desc = could not find container \"27aea46b58e76a82f39c3463a31c1a31abbfcda3347483de7ac0b7c37c3bc9ce\": container with ID starting with 27aea46b58e76a82f39c3463a31c1a31abbfcda3347483de7ac0b7c37c3bc9ce not found: ID does not exist" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.985523 5049 scope.go:117] "RemoveContainer" 
containerID="0b51e442a6157a4a23fc018e7ca5ef505d659dd02cc34a87f8fa6acac6552da7" Jan 27 19:43:26 crc kubenswrapper[5049]: E0127 19:43:26.985990 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b51e442a6157a4a23fc018e7ca5ef505d659dd02cc34a87f8fa6acac6552da7\": container with ID starting with 0b51e442a6157a4a23fc018e7ca5ef505d659dd02cc34a87f8fa6acac6552da7 not found: ID does not exist" containerID="0b51e442a6157a4a23fc018e7ca5ef505d659dd02cc34a87f8fa6acac6552da7" Jan 27 19:43:26 crc kubenswrapper[5049]: I0127 19:43:26.986043 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b51e442a6157a4a23fc018e7ca5ef505d659dd02cc34a87f8fa6acac6552da7"} err="failed to get container status \"0b51e442a6157a4a23fc018e7ca5ef505d659dd02cc34a87f8fa6acac6552da7\": rpc error: code = NotFound desc = could not find container \"0b51e442a6157a4a23fc018e7ca5ef505d659dd02cc34a87f8fa6acac6552da7\": container with ID starting with 0b51e442a6157a4a23fc018e7ca5ef505d659dd02cc34a87f8fa6acac6552da7 not found: ID does not exist" Jan 27 19:43:27 crc kubenswrapper[5049]: I0127 19:43:27.657028 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5177703-db43-48d8-ada0-2f8d79bd7061" path="/var/lib/kubelet/pods/f5177703-db43-48d8-ada0-2f8d79bd7061/volumes" Jan 27 19:43:47 crc kubenswrapper[5049]: I0127 19:43:47.208332 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59c589b678-6wqpn_21fc71b8-47e1-410a-aa00-1e365cca5af7/barbican-api/0.log" Jan 27 19:43:47 crc kubenswrapper[5049]: I0127 19:43:47.408493 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59c589b678-6wqpn_21fc71b8-47e1-410a-aa00-1e365cca5af7/barbican-api-log/0.log" Jan 27 19:43:47 crc kubenswrapper[5049]: I0127 19:43:47.528025 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-55c49fb878-2ctcc_e9142e28-4c00-4dc0-b0f0-0370cd638740/barbican-keystone-listener/0.log" Jan 27 19:43:47 crc kubenswrapper[5049]: I0127 19:43:47.582157 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-55c49fb878-2ctcc_e9142e28-4c00-4dc0-b0f0-0370cd638740/barbican-keystone-listener-log/0.log" Jan 27 19:43:47 crc kubenswrapper[5049]: I0127 19:43:47.717855 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-979c4cfc7-x5nfk_103768dd-1e58-4bab-88df-808576121cb4/barbican-worker-log/0.log" Jan 27 19:43:47 crc kubenswrapper[5049]: I0127 19:43:47.719513 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-979c4cfc7-x5nfk_103768dd-1e58-4bab-88df-808576121cb4/barbican-worker/0.log" Jan 27 19:43:47 crc kubenswrapper[5049]: I0127 19:43:47.938711 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3756b8fa-b794-4021-b68f-1bb730f59b03/cinder-api-log/0.log" Jan 27 19:43:47 crc kubenswrapper[5049]: I0127 19:43:47.961456 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_3756b8fa-b794-4021-b68f-1bb730f59b03/cinder-api/0.log" Jan 27 19:43:48 crc kubenswrapper[5049]: I0127 19:43:48.177818 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ae55fc0c-fd54-4e6a-a3ae-89d2e764d789/probe/0.log" Jan 27 19:43:48 crc kubenswrapper[5049]: I0127 19:43:48.202358 5049 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_62aac37e-a0bb-428e-b094-71a7fd36f533/cinder-scheduler/0.log" Jan 27 19:43:48 crc kubenswrapper[5049]: I0127 19:43:48.234723 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_ae55fc0c-fd54-4e6a-a3ae-89d2e764d789/cinder-backup/0.log" Jan 27 19:43:48 crc kubenswrapper[5049]: I0127 19:43:48.478180 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_62aac37e-a0bb-428e-b094-71a7fd36f533/probe/0.log" Jan 27 19:43:48 crc kubenswrapper[5049]: I0127 19:43:48.482529 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_bbe41245-d028-4fc4-bce8-d166cb88403d/probe/0.log" Jan 27 19:43:48 crc kubenswrapper[5049]: I0127 19:43:48.507567 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_bbe41245-d028-4fc4-bce8-d166cb88403d/cinder-volume/0.log" Jan 27 19:43:48 crc kubenswrapper[5049]: I0127 19:43:48.667363 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5cbd8fbdcc-jwtjd_ada00ff1-234e-426a-8867-2a885fd955e1/init/0.log" Jan 27 19:43:48 crc kubenswrapper[5049]: I0127 19:43:48.805807 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5cbd8fbdcc-jwtjd_ada00ff1-234e-426a-8867-2a885fd955e1/init/0.log" Jan 27 19:43:48 crc kubenswrapper[5049]: I0127 19:43:48.837560 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5cbd8fbdcc-jwtjd_ada00ff1-234e-426a-8867-2a885fd955e1/dnsmasq-dns/0.log" Jan 27 19:43:48 crc kubenswrapper[5049]: I0127 19:43:48.862528 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_47f141fd-e32a-4605-84c5-f5af36c92ad3/glance-httpd/0.log" Jan 27 19:43:49 crc kubenswrapper[5049]: I0127 19:43:49.010110 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_47f141fd-e32a-4605-84c5-f5af36c92ad3/glance-log/0.log" Jan 27 19:43:49 crc kubenswrapper[5049]: I0127 19:43:49.080417 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_fa6051da-b458-4c9e-80ac-4a1a64bcb1ec/glance-httpd/0.log" Jan 27 19:43:49 crc kubenswrapper[5049]: I0127 19:43:49.112504 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_fa6051da-b458-4c9e-80ac-4a1a64bcb1ec/glance-log/0.log" Jan 27 19:43:49 crc kubenswrapper[5049]: I0127 19:43:49.417332 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29492341-cb5bv_40aa2585-6ba9-44c4-9511-816f01c80de1/keystone-cron/0.log" Jan 27 19:43:49 crc kubenswrapper[5049]: I0127 19:43:49.417520 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7f8ddf49c9-t4b7l_a1e6d26a-84ec-4812-8b5e-82ed7beb6f9f/keystone-api/0.log" Jan 27 19:43:49 crc kubenswrapper[5049]: I0127 19:43:49.645723 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_2c1ba67d-2fb8-42a5-a89b-12e3245907ed/adoption/0.log" Jan 27 19:43:49 crc kubenswrapper[5049]: I0127 19:43:49.982042 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d4c98d6f7-xq4dq_78274e9b-3cec-41ef-aed1-92296bc999a6/neutron-api/0.log" Jan 27 19:43:50 crc kubenswrapper[5049]: I0127 19:43:50.046478 5049 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-7d4c98d6f7-xq4dq_78274e9b-3cec-41ef-aed1-92296bc999a6/neutron-httpd/0.log" Jan 27 19:43:50 crc kubenswrapper[5049]: I0127 19:43:50.356234 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6b88b020-f951-4293-9659-b10b64dd2aad/nova-api-api/0.log" Jan 27 19:43:50 crc kubenswrapper[5049]: I0127 19:43:50.528046 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6b88b020-f951-4293-9659-b10b64dd2aad/nova-api-log/0.log" Jan 27 19:43:50 crc kubenswrapper[5049]: I0127 19:43:50.719523 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_831a6998-fc4d-44d4-bf18-e75f37c02c3e/nova-cell0-conductor-conductor/0.log" Jan 27 19:43:50 crc kubenswrapper[5049]: I0127 19:43:50.895875 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_69b362a6-4c77-4063-aeb1-4884ef4eaf46/nova-cell1-conductor-conductor/0.log" Jan 27 19:43:51 crc kubenswrapper[5049]: I0127 19:43:51.180074 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0a36975d-06df-4a40-9e9a-e14b781ee58f/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 19:43:51 crc kubenswrapper[5049]: I0127 19:43:51.269545 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a32c6dea-a530-4aae-91fc-e4de8443aadf/nova-metadata-log/0.log" Jan 27 19:43:51 crc kubenswrapper[5049]: I0127 19:43:51.543978 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a32c6dea-a530-4aae-91fc-e4de8443aadf/nova-metadata-metadata/0.log" Jan 27 19:43:51 crc kubenswrapper[5049]: I0127 19:43:51.591930 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b5f3f019-3ad3-4602-9dda-409b8370843b/nova-scheduler-scheduler/0.log" Jan 27 19:43:51 crc kubenswrapper[5049]: I0127 19:43:51.599814 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5f6787874b-szhcb_ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c/init/0.log" Jan 27 19:43:51 crc kubenswrapper[5049]: I0127 19:43:51.939531 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5f6787874b-szhcb_ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c/octavia-api-provider-agent/0.log" Jan 27 19:43:52 crc kubenswrapper[5049]: I0127 19:43:52.098124 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5f6787874b-szhcb_ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c/init/0.log" Jan 27 19:43:52 crc kubenswrapper[5049]: I0127 19:43:52.143066 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-5f6787874b-szhcb_ba4d9c2a-a97d-4e3b-82e3-290d7a0c607c/octavia-api/0.log" Jan 27 19:43:52 crc kubenswrapper[5049]: I0127 19:43:52.228459 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-nhbx4_7f696337-d60b-4e22-be1f-a5bb48af356b/init/0.log" Jan 27 19:43:52 crc kubenswrapper[5049]: I0127 19:43:52.771544 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-l6mbc_564233a0-017c-4e34-8fd6-0102fd4063c0/init/0.log" Jan 27 19:43:52 crc kubenswrapper[5049]: I0127 19:43:52.774583 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-nhbx4_7f696337-d60b-4e22-be1f-a5bb48af356b/init/0.log" Jan 27 19:43:52 crc kubenswrapper[5049]: I0127 19:43:52.885378 5049 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-healthmanager-nhbx4_7f696337-d60b-4e22-be1f-a5bb48af356b/octavia-healthmanager/0.log" Jan 27 19:43:53 crc kubenswrapper[5049]: I0127 19:43:53.041230 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-l6mbc_564233a0-017c-4e34-8fd6-0102fd4063c0/init/0.log" Jan 27 19:43:53 crc kubenswrapper[5049]: I0127 19:43:53.106008 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-l6mbc_564233a0-017c-4e34-8fd6-0102fd4063c0/octavia-housekeeping/0.log" Jan 27 19:43:53 crc kubenswrapper[5049]: I0127 19:43:53.158770 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-ms28h_3c9f2e6f-2581-4298-afc7-64c5424ebd56/init/0.log" Jan 27 19:43:53 crc kubenswrapper[5049]: I0127 19:43:53.337751 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-ms28h_3c9f2e6f-2581-4298-afc7-64c5424ebd56/init/0.log" Jan 27 19:43:53 crc kubenswrapper[5049]: I0127 19:43:53.358634 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-ms28h_3c9f2e6f-2581-4298-afc7-64c5424ebd56/octavia-amphora-httpd/0.log" Jan 27 19:43:53 crc kubenswrapper[5049]: I0127 19:43:53.445778 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-zbfsr_931851bb-1d01-4828-9eb9-a45836710020/init/0.log" Jan 27 19:43:53 crc kubenswrapper[5049]: I0127 19:43:53.658240 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-zbfsr_931851bb-1d01-4828-9eb9-a45836710020/octavia-rsyslog/0.log" Jan 27 19:43:53 crc kubenswrapper[5049]: I0127 19:43:53.672842 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-zbfsr_931851bb-1d01-4828-9eb9-a45836710020/init/0.log" Jan 27 19:43:53 crc kubenswrapper[5049]: I0127 19:43:53.675181 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-9gnt2_fa152fbd-d248-415c-a6bd-e97978297a6d/init/0.log" Jan 27 19:43:53 crc kubenswrapper[5049]: I0127 19:43:53.864592 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-9gnt2_fa152fbd-d248-415c-a6bd-e97978297a6d/init/0.log" Jan 27 19:43:54 crc kubenswrapper[5049]: I0127 19:43:54.007911 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca/mysql-bootstrap/0.log" Jan 27 19:43:54 crc kubenswrapper[5049]: I0127 19:43:54.098432 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-9gnt2_fa152fbd-d248-415c-a6bd-e97978297a6d/octavia-worker/0.log" Jan 27 19:43:54 crc kubenswrapper[5049]: I0127 19:43:54.552038 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca/mysql-bootstrap/0.log" Jan 27 19:43:54 crc kubenswrapper[5049]: I0127 19:43:54.658075 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6b2b0ceb-df56-437d-a7e1-e79a57a7e5ca/galera/0.log" Jan 27 19:43:54 crc kubenswrapper[5049]: I0127 19:43:54.736805 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_62237ec0-8150-441d-ad88-6b48f26a9aa7/mysql-bootstrap/0.log" Jan 27 19:43:54 crc kubenswrapper[5049]: I0127 19:43:54.958363 5049 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_62237ec0-8150-441d-ad88-6b48f26a9aa7/mysql-bootstrap/0.log" Jan 27 19:43:54 crc kubenswrapper[5049]: I0127 19:43:54.980841 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_260481f1-9af6-4824-bf84-18e080e5e1a6/openstackclient/0.log" Jan 27 19:43:54 crc kubenswrapper[5049]: I0127 19:43:54.988522 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_62237ec0-8150-441d-ad88-6b48f26a9aa7/galera/0.log" Jan 27 19:43:55 crc kubenswrapper[5049]: I0127 19:43:55.165173 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-grm4c_30bb024b-8bc7-45cf-b794-5b9039e4b334/ovn-controller/0.log" Jan 27 19:43:55 crc kubenswrapper[5049]: I0127 19:43:55.185100 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8f2a2869-0227-4632-98b0-faced10a3a7d/memcached/0.log" Jan 27 19:43:55 crc kubenswrapper[5049]: I0127 19:43:55.256389 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2zf4b_6e5eaf59-09f4-4908-802b-9e1e58f6aa11/openstack-network-exporter/0.log" Jan 27 19:43:55 crc kubenswrapper[5049]: I0127 19:43:55.386231 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fqwht_355c8da6-76d2-4246-b327-403f1c9aa64c/ovsdb-server-init/0.log" Jan 27 19:43:55 crc kubenswrapper[5049]: I0127 19:43:55.531864 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fqwht_355c8da6-76d2-4246-b327-403f1c9aa64c/ovsdb-server/0.log" Jan 27 19:43:55 crc kubenswrapper[5049]: I0127 19:43:55.546649 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fqwht_355c8da6-76d2-4246-b327-403f1c9aa64c/ovsdb-server-init/0.log" Jan 27 19:43:55 crc kubenswrapper[5049]: I0127 19:43:55.549152 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-fqwht_355c8da6-76d2-4246-b327-403f1c9aa64c/ovs-vswitchd/0.log" Jan 27 19:43:55 crc kubenswrapper[5049]: I0127 19:43:55.653832 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_3c07f615-4864-4724-b449-b5c91c539778/adoption/0.log" Jan 27 19:43:55 crc kubenswrapper[5049]: I0127 19:43:55.713340 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4b3f72b8-e2df-430e-b81c-b2d59bf6b022/openstack-network-exporter/0.log" Jan 27 19:43:55 crc kubenswrapper[5049]: I0127 19:43:55.751603 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4b3f72b8-e2df-430e-b81c-b2d59bf6b022/ovn-northd/0.log" Jan 27 19:43:55 crc kubenswrapper[5049]: I0127 19:43:55.860019 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f05b7e7d-147f-44c9-a1a0-3bf20a581668/openstack-network-exporter/0.log" Jan 27 19:43:55 crc kubenswrapper[5049]: I0127 19:43:55.912264 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f05b7e7d-147f-44c9-a1a0-3bf20a581668/ovsdbserver-nb/0.log" Jan 27 19:43:55 crc kubenswrapper[5049]: I0127 19:43:55.992450 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_3e8ced19-22b2-4493-bd82-284b419a8045/openstack-network-exporter/0.log" Jan 27 19:43:56 crc kubenswrapper[5049]: I0127 19:43:56.096194 5049 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-1_3e8ced19-22b2-4493-bd82-284b419a8045/ovsdbserver-nb/0.log" Jan 27 19:43:56 crc kubenswrapper[5049]: I0127 19:43:56.126493 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_26d69dfa-16a7-4f78-89e8-4786c7efbfa4/openstack-network-exporter/0.log" Jan 27 19:43:56 crc kubenswrapper[5049]: I0127 19:43:56.207143 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_26d69dfa-16a7-4f78-89e8-4786c7efbfa4/ovsdbserver-nb/0.log" Jan 27 19:43:56 crc kubenswrapper[5049]: I0127 19:43:56.292437 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ae4f99ce-4541-4d85-a1a6-64a8295dbd37/openstack-network-exporter/0.log" Jan 27 19:43:56 crc kubenswrapper[5049]: I0127 19:43:56.370061 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ae4f99ce-4541-4d85-a1a6-64a8295dbd37/ovsdbserver-sb/0.log" Jan 27 19:43:56 crc kubenswrapper[5049]: I0127 19:43:56.554447 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_6d2aff60-fc81-4c03-8ad4-6555e3a3a41d/openstack-network-exporter/0.log" Jan 27 19:43:56 crc kubenswrapper[5049]: I0127 19:43:56.554564 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_6d2aff60-fc81-4c03-8ad4-6555e3a3a41d/ovsdbserver-sb/0.log" Jan 27 19:43:56 crc kubenswrapper[5049]: I0127 19:43:56.682902 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_8bd16677-2b72-4120-b689-ce563651bfe9/openstack-network-exporter/0.log" Jan 27 19:43:56 crc kubenswrapper[5049]: I0127 19:43:56.736187 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_8bd16677-2b72-4120-b689-ce563651bfe9/ovsdbserver-sb/0.log" Jan 27 19:43:56 crc kubenswrapper[5049]: I0127 19:43:56.792262 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-79994c87b8-v7w7s_45e8b898-a4dd-4f5c-be7d-e849fd6530ec/placement-api/0.log" Jan 27 19:43:56 crc kubenswrapper[5049]: I0127 19:43:56.969893 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-79994c87b8-v7w7s_45e8b898-a4dd-4f5c-be7d-e849fd6530ec/placement-log/0.log" Jan 27 19:43:56 crc kubenswrapper[5049]: I0127 19:43:56.973150 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_19000626-d51a-422d-b61d-caec78fc08ad/setup-container/0.log" Jan 27 19:43:57 crc kubenswrapper[5049]: I0127 19:43:57.124748 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_19000626-d51a-422d-b61d-caec78fc08ad/setup-container/0.log" Jan 27 19:43:57 crc kubenswrapper[5049]: I0127 19:43:57.139301 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_19000626-d51a-422d-b61d-caec78fc08ad/rabbitmq/0.log" Jan 27 19:43:57 crc kubenswrapper[5049]: I0127 19:43:57.166753 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b46d5528-2364-438e-8b91-18a085b8c625/setup-container/0.log" Jan 27 19:43:57 crc kubenswrapper[5049]: I0127 19:43:57.362544 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b46d5528-2364-438e-8b91-18a085b8c625/rabbitmq/0.log" Jan 27 19:43:57 crc kubenswrapper[5049]: I0127 19:43:57.375456 5049 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_b46d5528-2364-438e-8b91-18a085b8c625/setup-container/0.log" Jan 27 19:44:17 crc kubenswrapper[5049]: I0127 19:44:17.663862 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx_78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9/util/0.log" Jan 27 19:44:17 crc kubenswrapper[5049]: I0127 19:44:17.853980 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx_78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9/pull/0.log" Jan 27 19:44:17 crc kubenswrapper[5049]: I0127 19:44:17.912420 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx_78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9/pull/0.log" Jan 27 19:44:17 crc kubenswrapper[5049]: I0127 19:44:17.937947 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx_78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9/util/0.log" Jan 27 19:44:18 crc kubenswrapper[5049]: I0127 19:44:18.062849 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx_78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9/pull/0.log" Jan 27 19:44:18 crc kubenswrapper[5049]: I0127 19:44:18.078950 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx_78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9/util/0.log" Jan 27 19:44:18 crc kubenswrapper[5049]: I0127 19:44:18.171714 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5ab3bbf6468419996f05a174dfc3149a8a9e7a054846a6cd382c2b8115vvndx_78a43db5-17e9-4d85-88b1-0ebe1ef3a1b9/extract/0.log" Jan 27 19:44:18 crc kubenswrapper[5049]: I0127 19:44:18.376473 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-8d8dj_cf5055fa-99ac-4063-b192-5743f331b01a/manager/0.log" Jan 27 19:44:18 crc kubenswrapper[5049]: I0127 19:44:18.403498 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-6fxd4_1046a5c5-7064-4f3d-8a27-4c70edefff18/manager/0.log" Jan 27 19:44:18 crc kubenswrapper[5049]: I0127 19:44:18.618620 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-pb687_713c4f3d-0a16-43b1-a9ba-52f2905863b7/manager/0.log" Jan 27 19:44:18 crc kubenswrapper[5049]: I0127 19:44:18.720980 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-6gzdm_6ecd49cd-b0f1-40f9-80b6-5f0fedc99b97/manager/0.log" Jan 27 19:44:18 crc kubenswrapper[5049]: I0127 19:44:18.816190 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-575ffb885b-jbx77_5821c221-c684-480a-a174-2154d785d9be/manager/0.log" Jan 27 19:44:18 crc kubenswrapper[5049]: I0127 19:44:18.895938 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-hzzj2_61d8b171-8120-44cb-a074-54ea2aea3735/manager/0.log" Jan 27 19:44:19 crc kubenswrapper[5049]: I0127 19:44:19.128040 5049 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-pqbc2_b58745a3-9e9d-4337-8965-6caf2ade0bdd/manager/0.log" Jan 27 19:44:19 crc kubenswrapper[5049]: I0127 19:44:19.416736 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-tdbpz_d5477f31-e31c-47a2-bbaf-543196a1908e/manager/0.log" Jan 27 19:44:19 crc kubenswrapper[5049]: I0127 19:44:19.441500 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-qjftl_f401ecb9-876a-4bed-9848-1ab332f71010/manager/0.log" Jan 27 19:44:19 crc kubenswrapper[5049]: I0127 19:44:19.534818 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-hdd85_61d18054-36c7-4e08-a20d-7dd2bb853959/manager/0.log" Jan 27 19:44:19 crc kubenswrapper[5049]: I0127 19:44:19.644199 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-45drh_03f2c7d7-4a4d-479c-aace-8cb3f75f5a34/manager/0.log" Jan 27 19:44:19 crc kubenswrapper[5049]: I0127 19:44:19.759222 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-jmr5r_596a2bcb-e21f-4ea7-ac0d-1b1f313b7e82/manager/0.log" Jan 27 19:44:20 crc kubenswrapper[5049]: I0127 19:44:20.013376 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-zxh24_1d2513e9-4b0c-4bf3-9fed-81a347f8e5bf/manager/0.log" Jan 27 19:44:20 crc kubenswrapper[5049]: I0127 19:44:20.056918 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-ddcbfd695-24wdf_89eae96a-ae74-441c-b4f4-6423b01e11c9/manager/0.log" Jan 27 19:44:20 crc kubenswrapper[5049]: I0127 19:44:20.185042 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8548mcwt_954304ce-c36e-4eec-989f-a56c4d63f97e/manager/0.log" Jan 27 19:44:20 crc kubenswrapper[5049]: I0127 19:44:20.355536 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7f484b79bf-6tn6x_e89211f0-4414-462f-b634-68ebf429f864/operator/0.log" Jan 27 19:44:20 crc kubenswrapper[5049]: I0127 19:44:20.986591 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fxk5l_385c67fc-30f8-409c-b59e-d5d7182730c8/registry-server/0.log" Jan 27 19:44:21 crc kubenswrapper[5049]: I0127 19:44:21.353854 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-b7gkd_640dbea5-d940-4917-ba99-8b506007a8c8/manager/0.log" Jan 27 19:44:21 crc kubenswrapper[5049]: I0127 19:44:21.496090 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-shfpw_ac863301-b663-4e76-83af-5b1596a19d5a/manager/0.log" Jan 27 19:44:21 crc kubenswrapper[5049]: I0127 19:44:21.573025 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-26ckr_89c673d2-c27b-4b39-bb48-b463d5626491/operator/0.log" Jan 27 19:44:21 crc kubenswrapper[5049]: I0127 19:44:21.720763 5049 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-cz6ks_42f93a5d-5678-4e1e-b5a0-1bd0017dab7c/manager/0.log" Jan 27 19:44:21 crc kubenswrapper[5049]: I0127 19:44:21.872188 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-dzmjg_23d834fc-5840-49a4-aa49-5a84e8490e39/manager/0.log" Jan 27 19:44:21 crc kubenswrapper[5049]: I0127 19:44:21.963212 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-qrvc5_63489758-4a49-40f2-8886-7c09cc103f40/manager/0.log" Jan 27 19:44:22 crc kubenswrapper[5049]: I0127 19:44:22.065906 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c9bb4b66c-rxn7s_0b239d42-1dea-4559-8cdd-8db8cb8addab/manager/0.log" Jan 27 19:44:22 crc kubenswrapper[5049]: I0127 19:44:22.566327 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-556b4c5b88-vhwl2_d0a4b955-b7b5-45be-a997-7ed2d360218e/manager/0.log" Jan 27 19:44:43 crc kubenswrapper[5049]: I0127 19:44:43.135975 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-h8z99_0092259f-7233-4db1-9ed5-667deb592e96/control-plane-machine-set-operator/0.log" Jan 27 19:44:43 crc kubenswrapper[5049]: I0127 19:44:43.314317 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-c9dvb_e4d4c630-42d2-490d-8782-1fdb7723181d/kube-rbac-proxy/0.log" Jan 27 19:44:43 crc kubenswrapper[5049]: I0127 19:44:43.332033 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-c9dvb_e4d4c630-42d2-490d-8782-1fdb7723181d/machine-api-operator/0.log" Jan 27 19:44:47 crc kubenswrapper[5049]: I0127 19:44:47.781853 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:44:47 crc kubenswrapper[5049]: I0127 19:44:47.782506 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:44:56 crc kubenswrapper[5049]: I0127 19:44:56.855172 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-g4n6q_6d85b59c-bed6-4dff-8ae5-cca3c210210e/cert-manager-controller/0.log" Jan 27 19:44:57 crc kubenswrapper[5049]: I0127 19:44:57.040760 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-wcq9n_4091c9ea-7f48-4824-867f-378f7a2a8c04/cert-manager-cainjector/0.log" Jan 27 19:44:57 crc kubenswrapper[5049]: I0127 19:44:57.060872 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-w4pq6_d57413ac-f34c-430e-9c96-18f0da415614/cert-manager-webhook/0.log" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.141180 5049 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq"] Jan 27 19:45:00 crc kubenswrapper[5049]: E0127 19:45:00.142343 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756b519e-6d96-4d54-94ec-f58fed18fb2c" containerName="container-00" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.142358 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="756b519e-6d96-4d54-94ec-f58fed18fb2c" containerName="container-00" Jan 27 19:45:00 crc kubenswrapper[5049]: E0127 19:45:00.142377 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5177703-db43-48d8-ada0-2f8d79bd7061" containerName="extract-content" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.142384 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5177703-db43-48d8-ada0-2f8d79bd7061" containerName="extract-content" Jan 27 19:45:00 crc kubenswrapper[5049]: E0127 19:45:00.142405 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5177703-db43-48d8-ada0-2f8d79bd7061" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.142411 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5177703-db43-48d8-ada0-2f8d79bd7061" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[5049]: E0127 19:45:00.142421 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5177703-db43-48d8-ada0-2f8d79bd7061" containerName="extract-utilities" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.142426 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5177703-db43-48d8-ada0-2f8d79bd7061" containerName="extract-utilities" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.142630 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="756b519e-6d96-4d54-94ec-f58fed18fb2c" containerName="container-00" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.142655 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5177703-db43-48d8-ada0-2f8d79bd7061" containerName="registry-server" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.143343 5049 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.146647 5049 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.147014 5049 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.154586 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq"] Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.171142 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn6ht\" (UniqueName: \"kubernetes.io/projected/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-kube-api-access-sn6ht\") pod \"collect-profiles-29492385-td5gq\" (UID: \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.171326 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-config-volume\") pod \"collect-profiles-29492385-td5gq\" (UID: \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.171356 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-secret-volume\") pod \"collect-profiles-29492385-td5gq\" (UID: \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.273313 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-config-volume\") pod \"collect-profiles-29492385-td5gq\" (UID: \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.273360 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-secret-volume\") pod \"collect-profiles-29492385-td5gq\" (UID: \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.273467 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn6ht\" (UniqueName: \"kubernetes.io/projected/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-kube-api-access-sn6ht\") pod \"collect-profiles-29492385-td5gq\" (UID: \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.274384 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-config-volume\") pod 
\"collect-profiles-29492385-td5gq\" (UID: \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.294411 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-secret-volume\") pod \"collect-profiles-29492385-td5gq\" (UID: \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.296840 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn6ht\" (UniqueName: \"kubernetes.io/projected/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-kube-api-access-sn6ht\") pod \"collect-profiles-29492385-td5gq\" (UID: \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" Jan 27 19:45:00 crc kubenswrapper[5049]: I0127 19:45:00.606133 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" Jan 27 19:45:01 crc kubenswrapper[5049]: I0127 19:45:01.107412 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq"] Jan 27 19:45:01 crc kubenswrapper[5049]: I0127 19:45:01.679308 5049 generic.go:334] "Generic (PLEG): container finished" podID="a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0" containerID="243b08510ba8e89dfa7f4ff423627b3e54667d9b383c184d1bad5fc6dc7b3a88" exitCode=0 Jan 27 19:45:01 crc kubenswrapper[5049]: I0127 19:45:01.679411 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" event={"ID":"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0","Type":"ContainerDied","Data":"243b08510ba8e89dfa7f4ff423627b3e54667d9b383c184d1bad5fc6dc7b3a88"} Jan 27 19:45:01 crc kubenswrapper[5049]: I0127 19:45:01.679614 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" event={"ID":"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0","Type":"ContainerStarted","Data":"d7b04ccb1184d15faf0b11fab423965097c253291c967eb98d25ec53a49720fe"} Jan 27 19:45:03 crc kubenswrapper[5049]: I0127 19:45:03.009626 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" Jan 27 19:45:03 crc kubenswrapper[5049]: I0127 19:45:03.128487 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-config-volume\") pod \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\" (UID: \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\") " Jan 27 19:45:03 crc kubenswrapper[5049]: I0127 19:45:03.128573 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn6ht\" (UniqueName: \"kubernetes.io/projected/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-kube-api-access-sn6ht\") pod \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\" (UID: \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\") " Jan 27 19:45:03 crc kubenswrapper[5049]: I0127 19:45:03.129357 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-config-volume" (OuterVolumeSpecName: "config-volume") pod "a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0" (UID: "a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 19:45:03 crc kubenswrapper[5049]: I0127 19:45:03.130930 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-secret-volume\") pod \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\" (UID: \"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0\") " Jan 27 19:45:03 crc kubenswrapper[5049]: I0127 19:45:03.131410 5049 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:03 crc kubenswrapper[5049]: I0127 19:45:03.147949 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-kube-api-access-sn6ht" (OuterVolumeSpecName: "kube-api-access-sn6ht") pod "a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0" (UID: "a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0"). InnerVolumeSpecName "kube-api-access-sn6ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:45:03 crc kubenswrapper[5049]: I0127 19:45:03.148561 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0" (UID: "a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 19:45:03 crc kubenswrapper[5049]: I0127 19:45:03.233122 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn6ht\" (UniqueName: \"kubernetes.io/projected/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-kube-api-access-sn6ht\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:03 crc kubenswrapper[5049]: I0127 19:45:03.233355 5049 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 19:45:03 crc kubenswrapper[5049]: I0127 19:45:03.708538 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" event={"ID":"a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0","Type":"ContainerDied","Data":"d7b04ccb1184d15faf0b11fab423965097c253291c967eb98d25ec53a49720fe"} Jan 27 19:45:03 crc kubenswrapper[5049]: I0127 19:45:03.708579 5049 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7b04ccb1184d15faf0b11fab423965097c253291c967eb98d25ec53a49720fe" Jan 27 19:45:03 crc kubenswrapper[5049]: I0127 19:45:03.708613 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492385-td5gq" Jan 27 19:45:04 crc kubenswrapper[5049]: I0127 19:45:04.074728 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt"] Jan 27 19:45:04 crc kubenswrapper[5049]: I0127 19:45:04.081165 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492340-wvsqt"] Jan 27 19:45:05 crc kubenswrapper[5049]: I0127 19:45:05.660805 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="199c03ed-e9ab-4cff-b0e8-0374c1b6462e" path="/var/lib/kubelet/pods/199c03ed-e9ab-4cff-b0e8-0374c1b6462e/volumes" Jan 27 19:45:12 crc kubenswrapper[5049]: I0127 19:45:12.038842 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-nhtv4_bf7a2b61-35ff-47f7-b2fa-65232d56e55e/nmstate-console-plugin/0.log" Jan 27 19:45:12 crc kubenswrapper[5049]: I0127 19:45:12.088542 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-gc6fh_6058058d-b942-4659-b721-80eb7add600d/nmstate-handler/0.log" Jan 27 19:45:12 crc kubenswrapper[5049]: I0127 19:45:12.235582 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k94tc_419ed877-28f2-4a53-87d2-51c31c16a385/nmstate-metrics/0.log" Jan 27 19:45:12 crc kubenswrapper[5049]: I0127 19:45:12.235688 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-k94tc_419ed877-28f2-4a53-87d2-51c31c16a385/kube-rbac-proxy/0.log" Jan 27 19:45:12 crc kubenswrapper[5049]: I0127 19:45:12.408577 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-ww57s_c9167faa-e84f-454f-9628-071acd6f4e99/nmstate-operator/0.log" Jan 27 19:45:12 crc kubenswrapper[5049]: I0127 19:45:12.440709 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-wphmd_9c3a3ed4-fabe-48cb-86fa-b7474a8e9b8f/nmstate-webhook/0.log" Jan 27 19:45:17 crc kubenswrapper[5049]: I0127 19:45:17.781791 5049 patch_prober.go:28] interesting 
pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:45:17 crc kubenswrapper[5049]: I0127 19:45:17.783935 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:45:21 crc kubenswrapper[5049]: I0127 19:45:21.479176 5049 scope.go:117] "RemoveContainer" containerID="72f0c4f8283bf3abd73edc031ce71b46ec841fe35127d68eb9f7ae5989a914e6" Jan 27 19:45:39 crc kubenswrapper[5049]: I0127 19:45:39.883030 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8ngsz_a03921d3-73b0-4197-bce2-1c931a7d784e/kube-rbac-proxy/0.log" Jan 27 19:45:40 crc kubenswrapper[5049]: I0127 19:45:40.231177 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/cp-frr-files/0.log" Jan 27 19:45:40 crc kubenswrapper[5049]: I0127 19:45:40.295903 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8ngsz_a03921d3-73b0-4197-bce2-1c931a7d784e/controller/0.log" Jan 27 19:45:40 crc kubenswrapper[5049]: I0127 19:45:40.342077 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/cp-frr-files/0.log" Jan 27 19:45:40 crc kubenswrapper[5049]: I0127 19:45:40.393184 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/cp-reloader/0.log" Jan 27 19:45:40 crc kubenswrapper[5049]: I0127 19:45:40.448617 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/cp-metrics/0.log" Jan 27 19:45:40 crc kubenswrapper[5049]: I0127 19:45:40.483033 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/cp-reloader/0.log" Jan 27 19:45:40 crc kubenswrapper[5049]: I0127 19:45:40.622655 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/cp-frr-files/0.log" Jan 27 19:45:40 crc kubenswrapper[5049]: I0127 19:45:40.624091 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/cp-reloader/0.log" Jan 27 19:45:40 crc kubenswrapper[5049]: I0127 19:45:40.667593 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/cp-metrics/0.log" Jan 27 19:45:40 crc kubenswrapper[5049]: I0127 19:45:40.687214 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/cp-metrics/0.log" Jan 27 19:45:41 crc kubenswrapper[5049]: I0127 19:45:41.262406 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/cp-metrics/0.log" Jan 27 19:45:41 crc kubenswrapper[5049]: I0127 19:45:41.270576 5049 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/cp-reloader/0.log" Jan 27 19:45:41 crc kubenswrapper[5049]: I0127 19:45:41.291611 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/controller/0.log" Jan 27 19:45:41 crc kubenswrapper[5049]: I0127 19:45:41.299544 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/cp-frr-files/0.log" Jan 27 19:45:41 crc kubenswrapper[5049]: I0127 19:45:41.459398 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/frr-metrics/0.log" Jan 27 19:45:41 crc kubenswrapper[5049]: I0127 19:45:41.476432 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/kube-rbac-proxy-frr/0.log" Jan 27 19:45:41 crc kubenswrapper[5049]: I0127 19:45:41.493973 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/kube-rbac-proxy/0.log" Jan 27 19:45:41 crc kubenswrapper[5049]: I0127 19:45:41.644837 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/reloader/0.log" Jan 27 19:45:41 crc kubenswrapper[5049]: I0127 19:45:41.743779 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-bptwc_3e119304-dec2-43c2-8534-21be80f79b69/frr-k8s-webhook-server/0.log" Jan 27 19:45:42 crc kubenswrapper[5049]: I0127 19:45:42.153182 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6779f5b7c7-sntlr_382b1716-fd63-460f-a84f-0c37f695d08f/manager/0.log" Jan 27 19:45:42 crc kubenswrapper[5049]: I0127 19:45:42.313504 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7899c8c964-mnspp_36d6d6cd-b037-4b8b-af74-52af62b08fb6/webhook-server/0.log" Jan 27 19:45:42 crc kubenswrapper[5049]: I0127 19:45:42.431699 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-plq87_5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2/kube-rbac-proxy/0.log" Jan 27 19:45:43 crc kubenswrapper[5049]: I0127 19:45:43.970802 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-plq87_5f9b0a4d-dc3e-49bf-97fc-9de0e9c6d1b2/speaker/0.log" Jan 27 19:45:44 crc kubenswrapper[5049]: I0127 19:45:44.366400 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-cspr9_d6eb1833-4714-4674-b8c8-c8d367b09d77/frr/0.log" Jan 27 19:45:47 crc kubenswrapper[5049]: I0127 19:45:47.781949 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:45:47 crc kubenswrapper[5049]: I0127 19:45:47.782530 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:45:47 crc kubenswrapper[5049]: I0127 
19:45:47.782574 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 19:45:47 crc kubenswrapper[5049]: I0127 19:45:47.783290 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a239abd78b727996fe25828bb1f7bd7d0fd88569e96b06f68db274a7a474a4a4"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:45:47 crc kubenswrapper[5049]: I0127 19:45:47.783360 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://a239abd78b727996fe25828bb1f7bd7d0fd88569e96b06f68db274a7a474a4a4" gracePeriod=600 Jan 27 19:45:48 crc kubenswrapper[5049]: I0127 19:45:48.119132 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="a239abd78b727996fe25828bb1f7bd7d0fd88569e96b06f68db274a7a474a4a4" exitCode=0 Jan 27 19:45:48 crc kubenswrapper[5049]: I0127 19:45:48.119178 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"a239abd78b727996fe25828bb1f7bd7d0fd88569e96b06f68db274a7a474a4a4"} Jan 27 19:45:48 crc kubenswrapper[5049]: I0127 19:45:48.119579 5049 scope.go:117] "RemoveContainer" containerID="ddd6f357e4eed5f675e65a422fbba2980032e2c257108714d2545d7ca2dd9b07" Jan 27 19:45:49 crc kubenswrapper[5049]: I0127 19:45:49.129434 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerStarted","Data":"7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb"} Jan 27 19:45:55 crc kubenswrapper[5049]: I0127 19:45:55.983468 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm_88a53601-dcde-4640-9bc6-5fbb919a8efd/util/0.log" Jan 27 19:45:56 crc kubenswrapper[5049]: I0127 19:45:56.192146 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm_88a53601-dcde-4640-9bc6-5fbb919a8efd/util/0.log" Jan 27 19:45:56 crc kubenswrapper[5049]: I0127 19:45:56.248247 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm_88a53601-dcde-4640-9bc6-5fbb919a8efd/pull/0.log" Jan 27 19:45:56 crc kubenswrapper[5049]: I0127 19:45:56.294193 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm_88a53601-dcde-4640-9bc6-5fbb919a8efd/pull/0.log" Jan 27 19:45:56 crc kubenswrapper[5049]: I0127 19:45:56.437018 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm_88a53601-dcde-4640-9bc6-5fbb919a8efd/util/0.log" Jan 27 19:45:56 crc kubenswrapper[5049]: I0127 19:45:56.472228 5049 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm_88a53601-dcde-4640-9bc6-5fbb919a8efd/pull/0.log" Jan 27 19:45:56 crc kubenswrapper[5049]: I0127 19:45:56.495995 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ajxbqm_88a53601-dcde-4640-9bc6-5fbb919a8efd/extract/0.log" Jan 27 19:45:56 crc kubenswrapper[5049]: I0127 19:45:56.594415 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t_145acd4d-d458-4ea3-9abb-f5a58976ecf1/util/0.log" Jan 27 19:45:56 crc kubenswrapper[5049]: I0127 19:45:56.779739 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t_145acd4d-d458-4ea3-9abb-f5a58976ecf1/pull/0.log" Jan 27 19:45:56 crc kubenswrapper[5049]: I0127 19:45:56.826282 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t_145acd4d-d458-4ea3-9abb-f5a58976ecf1/util/0.log" Jan 27 19:45:56 crc kubenswrapper[5049]: I0127 19:45:56.827110 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t_145acd4d-d458-4ea3-9abb-f5a58976ecf1/pull/0.log" Jan 27 19:45:56 crc kubenswrapper[5049]: I0127 19:45:56.998334 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t_145acd4d-d458-4ea3-9abb-f5a58976ecf1/pull/0.log" Jan 27 19:45:57 crc kubenswrapper[5049]: I0127 19:45:57.000835 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t_145acd4d-d458-4ea3-9abb-f5a58976ecf1/extract/0.log" Jan 27 19:45:57 crc kubenswrapper[5049]: I0127 19:45:57.018419 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcjrt8t_145acd4d-d458-4ea3-9abb-f5a58976ecf1/util/0.log" Jan 27 19:45:57 crc kubenswrapper[5049]: I0127 19:45:57.178030 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5_16b53c32-6621-4bb5-b6c7-1b929414dd8c/util/0.log" Jan 27 19:45:57 crc kubenswrapper[5049]: I0127 19:45:57.355090 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5_16b53c32-6621-4bb5-b6c7-1b929414dd8c/pull/0.log" Jan 27 19:45:57 crc kubenswrapper[5049]: I0127 19:45:57.355846 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5_16b53c32-6621-4bb5-b6c7-1b929414dd8c/pull/0.log" Jan 27 19:45:57 crc kubenswrapper[5049]: I0127 19:45:57.382434 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5_16b53c32-6621-4bb5-b6c7-1b929414dd8c/util/0.log" Jan 27 19:45:57 crc kubenswrapper[5049]: I0127 19:45:57.512953 5049 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5_16b53c32-6621-4bb5-b6c7-1b929414dd8c/util/0.log" Jan 27 19:45:57 crc kubenswrapper[5049]: I0127 19:45:57.570990 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5_16b53c32-6621-4bb5-b6c7-1b929414dd8c/extract/0.log" Jan 27 19:45:57 crc kubenswrapper[5049]: I0127 19:45:57.578503 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71352gt5_16b53c32-6621-4bb5-b6c7-1b929414dd8c/pull/0.log" Jan 27 19:45:57 crc kubenswrapper[5049]: I0127 19:45:57.750173 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwv44_466244fe-7514-4967-90d8-2c722168a89f/extract-utilities/0.log" Jan 27 19:45:58 crc kubenswrapper[5049]: I0127 19:45:58.486635 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwv44_466244fe-7514-4967-90d8-2c722168a89f/extract-utilities/0.log" Jan 27 19:45:58 crc kubenswrapper[5049]: I0127 19:45:58.578251 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwv44_466244fe-7514-4967-90d8-2c722168a89f/extract-utilities/0.log" Jan 27 19:45:58 crc kubenswrapper[5049]: I0127 19:45:58.580567 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwv44_466244fe-7514-4967-90d8-2c722168a89f/extract-content/0.log" Jan 27 19:45:58 crc kubenswrapper[5049]: I0127 19:45:58.583173 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwv44_466244fe-7514-4967-90d8-2c722168a89f/extract-content/0.log" Jan 27 19:45:58 crc kubenswrapper[5049]: I0127 19:45:58.687786 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwv44_466244fe-7514-4967-90d8-2c722168a89f/extract-content/0.log" Jan 27 19:45:58 crc kubenswrapper[5049]: I0127 19:45:58.797338 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s6879_cc0438a5-9988-47c7-a925-5a22a6d6a1d4/extract-utilities/0.log" Jan 27 19:45:58 crc kubenswrapper[5049]: I0127 19:45:58.993931 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s6879_cc0438a5-9988-47c7-a925-5a22a6d6a1d4/extract-utilities/0.log" Jan 27 19:45:59 crc kubenswrapper[5049]: I0127 19:45:59.085947 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s6879_cc0438a5-9988-47c7-a925-5a22a6d6a1d4/extract-content/0.log" Jan 27 19:45:59 crc kubenswrapper[5049]: I0127 19:45:59.104821 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s6879_cc0438a5-9988-47c7-a925-5a22a6d6a1d4/extract-content/0.log" Jan 27 19:45:59 crc kubenswrapper[5049]: I0127 19:45:59.272688 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s6879_cc0438a5-9988-47c7-a925-5a22a6d6a1d4/extract-utilities/0.log" Jan 27 19:45:59 crc kubenswrapper[5049]: I0127 19:45:59.354806 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s6879_cc0438a5-9988-47c7-a925-5a22a6d6a1d4/extract-content/0.log" Jan 27 19:45:59 crc kubenswrapper[5049]: 
I0127 19:45:59.582508 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-s6879_cc0438a5-9988-47c7-a925-5a22a6d6a1d4/registry-server/0.log" Jan 27 19:45:59 crc kubenswrapper[5049]: I0127 19:45:59.584800 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-rdlx6_4ded2900-908e-416c-9028-cfb5926b7ad5/marketplace-operator/0.log" Jan 27 19:45:59 crc kubenswrapper[5049]: I0127 19:45:59.746618 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z4czz_63202912-fe76-4c4c-84c6-2f8073537a86/extract-utilities/0.log" Jan 27 19:46:00 crc kubenswrapper[5049]: I0127 19:46:00.000781 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z4czz_63202912-fe76-4c4c-84c6-2f8073537a86/extract-content/0.log" Jan 27 19:46:00 crc kubenswrapper[5049]: I0127 19:46:00.035416 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z4czz_63202912-fe76-4c4c-84c6-2f8073537a86/extract-content/0.log" Jan 27 19:46:00 crc kubenswrapper[5049]: I0127 19:46:00.041694 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z4czz_63202912-fe76-4c4c-84c6-2f8073537a86/extract-utilities/0.log" Jan 27 19:46:00 crc kubenswrapper[5049]: I0127 19:46:00.196376 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z4czz_63202912-fe76-4c4c-84c6-2f8073537a86/extract-content/0.log" Jan 27 19:46:00 crc kubenswrapper[5049]: I0127 19:46:00.244765 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z4czz_63202912-fe76-4c4c-84c6-2f8073537a86/extract-utilities/0.log" Jan 27 19:46:00 crc kubenswrapper[5049]: I0127 19:46:00.447758 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kwv44_466244fe-7514-4967-90d8-2c722168a89f/registry-server/0.log" Jan 27 19:46:00 crc kubenswrapper[5049]: I0127 19:46:00.450274 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2m7pb_82af32d9-43e8-4416-aab2-8107103cc7ff/extract-utilities/0.log" Jan 27 19:46:00 crc kubenswrapper[5049]: I0127 19:46:00.633524 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z4czz_63202912-fe76-4c4c-84c6-2f8073537a86/registry-server/0.log" Jan 27 19:46:00 crc kubenswrapper[5049]: I0127 19:46:00.674780 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2m7pb_82af32d9-43e8-4416-aab2-8107103cc7ff/extract-utilities/0.log" Jan 27 19:46:00 crc kubenswrapper[5049]: I0127 19:46:00.683377 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2m7pb_82af32d9-43e8-4416-aab2-8107103cc7ff/extract-content/0.log" Jan 27 19:46:00 crc kubenswrapper[5049]: I0127 19:46:00.790271 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2m7pb_82af32d9-43e8-4416-aab2-8107103cc7ff/extract-content/0.log" Jan 27 19:46:00 crc kubenswrapper[5049]: I0127 19:46:00.994300 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2m7pb_82af32d9-43e8-4416-aab2-8107103cc7ff/extract-utilities/0.log" Jan 27 19:46:01 crc kubenswrapper[5049]: I0127 19:46:01.023083 
5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2m7pb_82af32d9-43e8-4416-aab2-8107103cc7ff/extract-content/0.log" Jan 27 19:46:01 crc kubenswrapper[5049]: I0127 19:46:01.801802 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2m7pb_82af32d9-43e8-4416-aab2-8107103cc7ff/registry-server/0.log" Jan 27 19:46:21 crc kubenswrapper[5049]: E0127 19:46:21.937536 5049 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.20:54424->38.102.83.20:40557: read tcp 38.102.83.20:54424->38.102.83.20:40557: read: connection reset by peer Jan 27 19:47:51 crc kubenswrapper[5049]: I0127 19:47:51.234547 5049 generic.go:334] "Generic (PLEG): container finished" podID="629a46ea-c5f0-485b-9937-50eaca5ed965" containerID="f8499898143ee0eafcbd7bae630105dbc251feef23b27faf66e377d56d772b16" exitCode=0 Jan 27 19:47:51 crc kubenswrapper[5049]: I0127 19:47:51.234636 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mtjn2/must-gather-sjljc" event={"ID":"629a46ea-c5f0-485b-9937-50eaca5ed965","Type":"ContainerDied","Data":"f8499898143ee0eafcbd7bae630105dbc251feef23b27faf66e377d56d772b16"} Jan 27 19:47:51 crc kubenswrapper[5049]: I0127 19:47:51.235834 5049 scope.go:117] "RemoveContainer" containerID="f8499898143ee0eafcbd7bae630105dbc251feef23b27faf66e377d56d772b16" Jan 27 19:47:51 crc kubenswrapper[5049]: I0127 19:47:51.306869 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mtjn2_must-gather-sjljc_629a46ea-c5f0-485b-9937-50eaca5ed965/gather/0.log" Jan 27 19:48:00 crc kubenswrapper[5049]: I0127 19:48:00.269092 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mtjn2/must-gather-sjljc"] Jan 27 19:48:00 crc kubenswrapper[5049]: I0127 19:48:00.271543 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-mtjn2/must-gather-sjljc" podUID="629a46ea-c5f0-485b-9937-50eaca5ed965" containerName="copy" containerID="cri-o://0fd805b719c3109458d87288e5f1fbe4fa4531134077e0f28e64e31a951d4336" gracePeriod=2 Jan 27 19:48:00 crc kubenswrapper[5049]: I0127 19:48:00.279896 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mtjn2/must-gather-sjljc"] Jan 27 19:48:00 crc kubenswrapper[5049]: I0127 19:48:00.562705 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mtjn2_must-gather-sjljc_629a46ea-c5f0-485b-9937-50eaca5ed965/copy/0.log" Jan 27 19:48:00 crc kubenswrapper[5049]: I0127 19:48:00.565530 5049 generic.go:334] "Generic (PLEG): container finished" podID="629a46ea-c5f0-485b-9937-50eaca5ed965" containerID="0fd805b719c3109458d87288e5f1fbe4fa4531134077e0f28e64e31a951d4336" exitCode=143 Jan 27 19:48:00 crc kubenswrapper[5049]: I0127 19:48:00.776865 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mtjn2_must-gather-sjljc_629a46ea-c5f0-485b-9937-50eaca5ed965/copy/0.log" Jan 27 19:48:00 crc kubenswrapper[5049]: I0127 19:48:00.777409 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mtjn2/must-gather-sjljc" Jan 27 19:48:00 crc kubenswrapper[5049]: I0127 19:48:00.955239 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc6fr\" (UniqueName: \"kubernetes.io/projected/629a46ea-c5f0-485b-9937-50eaca5ed965-kube-api-access-zc6fr\") pod \"629a46ea-c5f0-485b-9937-50eaca5ed965\" (UID: \"629a46ea-c5f0-485b-9937-50eaca5ed965\") " Jan 27 19:48:00 crc kubenswrapper[5049]: I0127 19:48:00.955301 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/629a46ea-c5f0-485b-9937-50eaca5ed965-must-gather-output\") pod \"629a46ea-c5f0-485b-9937-50eaca5ed965\" (UID: \"629a46ea-c5f0-485b-9937-50eaca5ed965\") " Jan 27 19:48:00 crc kubenswrapper[5049]: I0127 19:48:00.963108 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/629a46ea-c5f0-485b-9937-50eaca5ed965-kube-api-access-zc6fr" (OuterVolumeSpecName: "kube-api-access-zc6fr") pod "629a46ea-c5f0-485b-9937-50eaca5ed965" (UID: "629a46ea-c5f0-485b-9937-50eaca5ed965"). InnerVolumeSpecName "kube-api-access-zc6fr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:48:01 crc kubenswrapper[5049]: I0127 19:48:01.057642 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc6fr\" (UniqueName: \"kubernetes.io/projected/629a46ea-c5f0-485b-9937-50eaca5ed965-kube-api-access-zc6fr\") on node \"crc\" DevicePath \"\"" Jan 27 19:48:01 crc kubenswrapper[5049]: I0127 19:48:01.104806 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/629a46ea-c5f0-485b-9937-50eaca5ed965-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "629a46ea-c5f0-485b-9937-50eaca5ed965" (UID: "629a46ea-c5f0-485b-9937-50eaca5ed965"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:48:01 crc kubenswrapper[5049]: I0127 19:48:01.159068 5049 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/629a46ea-c5f0-485b-9937-50eaca5ed965-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 19:48:01 crc kubenswrapper[5049]: I0127 19:48:01.574232 5049 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mtjn2_must-gather-sjljc_629a46ea-c5f0-485b-9937-50eaca5ed965/copy/0.log" Jan 27 19:48:01 crc kubenswrapper[5049]: I0127 19:48:01.574573 5049 scope.go:117] "RemoveContainer" containerID="0fd805b719c3109458d87288e5f1fbe4fa4531134077e0f28e64e31a951d4336" Jan 27 19:48:01 crc kubenswrapper[5049]: I0127 19:48:01.574770 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mtjn2/must-gather-sjljc" Jan 27 19:48:01 crc kubenswrapper[5049]: I0127 19:48:01.596811 5049 scope.go:117] "RemoveContainer" containerID="f8499898143ee0eafcbd7bae630105dbc251feef23b27faf66e377d56d772b16" Jan 27 19:48:01 crc kubenswrapper[5049]: I0127 19:48:01.667919 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="629a46ea-c5f0-485b-9937-50eaca5ed965" path="/var/lib/kubelet/pods/629a46ea-c5f0-485b-9937-50eaca5ed965/volumes" Jan 27 19:48:17 crc kubenswrapper[5049]: I0127 19:48:17.782259 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:48:17 crc kubenswrapper[5049]: I0127 19:48:17.782846 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:48:47 crc kubenswrapper[5049]: I0127 19:48:47.781260 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:48:47 crc kubenswrapper[5049]: I0127 19:48:47.781972 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:49:17 crc kubenswrapper[5049]: I0127 19:49:17.781226 5049 patch_prober.go:28] interesting pod/machine-config-daemon-2d7n9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 19:49:17 crc kubenswrapper[5049]: I0127 19:49:17.781806 5049 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 19:49:17 crc kubenswrapper[5049]: I0127 19:49:17.781865 5049 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" Jan 27 19:49:17 crc kubenswrapper[5049]: I0127 19:49:17.782748 5049 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb"} pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 19:49:17 crc kubenswrapper[5049]: I0127 19:49:17.782815 5049 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerName="machine-config-daemon" containerID="cri-o://7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb" gracePeriod=600 Jan 27 19:49:17 crc kubenswrapper[5049]: E0127 19:49:17.923285 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:49:18 crc kubenswrapper[5049]: I0127 19:49:18.260396 5049 generic.go:334] "Generic (PLEG): container finished" podID="b714597d-68b8-4f8f-9d55-9f1cea23324a" containerID="7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb" exitCode=0 Jan 27 19:49:18 crc kubenswrapper[5049]: I0127 19:49:18.260451 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" event={"ID":"b714597d-68b8-4f8f-9d55-9f1cea23324a","Type":"ContainerDied","Data":"7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb"} Jan 27 19:49:18 crc kubenswrapper[5049]: I0127 19:49:18.260899 5049 scope.go:117] "RemoveContainer" containerID="a239abd78b727996fe25828bb1f7bd7d0fd88569e96b06f68db274a7a474a4a4" Jan 27 19:49:18 crc kubenswrapper[5049]: I0127 19:49:18.261905 5049 scope.go:117] "RemoveContainer" containerID="7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb" Jan 27 19:49:18 crc kubenswrapper[5049]: E0127 19:49:18.262288 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.804351 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqsj"] Jan 27 19:49:19 crc kubenswrapper[5049]: E0127 19:49:19.806307 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0" containerName="collect-profiles" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.806429 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0" containerName="collect-profiles" Jan 27 19:49:19 crc kubenswrapper[5049]: E0127 19:49:19.806528 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629a46ea-c5f0-485b-9937-50eaca5ed965" containerName="copy" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.806610 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="629a46ea-c5f0-485b-9937-50eaca5ed965" containerName="copy" Jan 27 19:49:19 crc kubenswrapper[5049]: E0127 19:49:19.806720 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629a46ea-c5f0-485b-9937-50eaca5ed965" containerName="gather" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.806804 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="629a46ea-c5f0-485b-9937-50eaca5ed965" containerName="gather" Jan 27 19:49:19 crc kubenswrapper[5049]: 
I0127 19:49:19.807171 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="a40e5f97-fb2b-431a-9fa9-d2e7f84d85e0" containerName="collect-profiles" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.807273 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="629a46ea-c5f0-485b-9937-50eaca5ed965" containerName="gather" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.807398 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="629a46ea-c5f0-485b-9937-50eaca5ed965" containerName="copy" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.809368 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.820186 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqsj"] Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.840187 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1309ff09-7102-4d4f-98fe-da86b7aabb46-catalog-content\") pod \"redhat-marketplace-rgqsj\" (UID: \"1309ff09-7102-4d4f-98fe-da86b7aabb46\") " pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.840247 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tksf5\" (UniqueName: \"kubernetes.io/projected/1309ff09-7102-4d4f-98fe-da86b7aabb46-kube-api-access-tksf5\") pod \"redhat-marketplace-rgqsj\" (UID: \"1309ff09-7102-4d4f-98fe-da86b7aabb46\") " pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.840289 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1309ff09-7102-4d4f-98fe-da86b7aabb46-utilities\") pod \"redhat-marketplace-rgqsj\" (UID: \"1309ff09-7102-4d4f-98fe-da86b7aabb46\") " pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.942691 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1309ff09-7102-4d4f-98fe-da86b7aabb46-catalog-content\") pod \"redhat-marketplace-rgqsj\" (UID: \"1309ff09-7102-4d4f-98fe-da86b7aabb46\") " pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.942766 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tksf5\" (UniqueName: \"kubernetes.io/projected/1309ff09-7102-4d4f-98fe-da86b7aabb46-kube-api-access-tksf5\") pod \"redhat-marketplace-rgqsj\" (UID: \"1309ff09-7102-4d4f-98fe-da86b7aabb46\") " pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.942833 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1309ff09-7102-4d4f-98fe-da86b7aabb46-utilities\") pod \"redhat-marketplace-rgqsj\" (UID: \"1309ff09-7102-4d4f-98fe-da86b7aabb46\") " pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.943315 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1309ff09-7102-4d4f-98fe-da86b7aabb46-utilities\") pod \"redhat-marketplace-rgqsj\" (UID: \"1309ff09-7102-4d4f-98fe-da86b7aabb46\") " pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.943568 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1309ff09-7102-4d4f-98fe-da86b7aabb46-catalog-content\") pod \"redhat-marketplace-rgqsj\" (UID: \"1309ff09-7102-4d4f-98fe-da86b7aabb46\") " pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:19 crc kubenswrapper[5049]: I0127 19:49:19.970911 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tksf5\" (UniqueName: \"kubernetes.io/projected/1309ff09-7102-4d4f-98fe-da86b7aabb46-kube-api-access-tksf5\") pod \"redhat-marketplace-rgqsj\" (UID: \"1309ff09-7102-4d4f-98fe-da86b7aabb46\") " pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:20 crc kubenswrapper[5049]: I0127 19:49:20.176824 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:20 crc kubenswrapper[5049]: I0127 19:49:20.654380 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqsj"] Jan 27 19:49:21 crc kubenswrapper[5049]: I0127 19:49:21.292448 5049 generic.go:334] "Generic (PLEG): container finished" podID="1309ff09-7102-4d4f-98fe-da86b7aabb46" containerID="edcaf59ebf87e54017bcb382582ed464529ce5860fc80d1e619f6902da295966" exitCode=0 Jan 27 19:49:21 crc kubenswrapper[5049]: I0127 19:49:21.292521 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqsj" event={"ID":"1309ff09-7102-4d4f-98fe-da86b7aabb46","Type":"ContainerDied","Data":"edcaf59ebf87e54017bcb382582ed464529ce5860fc80d1e619f6902da295966"} Jan 27 19:49:21 crc kubenswrapper[5049]: I0127 19:49:21.292988 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqsj" event={"ID":"1309ff09-7102-4d4f-98fe-da86b7aabb46","Type":"ContainerStarted","Data":"b785c2360736bbdb967ecda689681abc22c316efbf5d8f702fb3a2c6a2b86b47"} Jan 27 19:49:21 crc kubenswrapper[5049]: I0127 19:49:21.294989 5049 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 19:49:21 crc kubenswrapper[5049]: I0127 19:49:21.592629 5049 scope.go:117] "RemoveContainer" containerID="6ea4ef9dec070e37e24999512142491c880bba70dd78f7d0e7673bf1189f06be" Jan 27 19:49:25 crc kubenswrapper[5049]: I0127 19:49:25.365247 5049 generic.go:334] "Generic (PLEG): container finished" podID="1309ff09-7102-4d4f-98fe-da86b7aabb46" containerID="1a5b0821fcea1d38fcc4ba85799015f26ff822c6ae80c5af14dd11f41791c9ee" exitCode=0 Jan 27 19:49:25 crc kubenswrapper[5049]: I0127 19:49:25.365302 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqsj" event={"ID":"1309ff09-7102-4d4f-98fe-da86b7aabb46","Type":"ContainerDied","Data":"1a5b0821fcea1d38fcc4ba85799015f26ff822c6ae80c5af14dd11f41791c9ee"} Jan 27 19:49:26 crc kubenswrapper[5049]: I0127 19:49:26.379218 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqsj" event={"ID":"1309ff09-7102-4d4f-98fe-da86b7aabb46","Type":"ContainerStarted","Data":"51165c8c96f7cc1482342a8ac005b977f353848c183bd24d953d78c9bf94a76e"} Jan 27 19:49:26 crc kubenswrapper[5049]: I0127 
19:49:26.408798 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rgqsj" podStartSLOduration=2.7915195329999998 podStartE2EDuration="7.408769557s" podCreationTimestamp="2026-01-27 19:49:19 +0000 UTC" firstStartedPulling="2026-01-27 19:49:21.29457108 +0000 UTC m=+10336.393544629" lastFinishedPulling="2026-01-27 19:49:25.911821064 +0000 UTC m=+10341.010794653" observedRunningTime="2026-01-27 19:49:26.401893451 +0000 UTC m=+10341.500867050" watchObservedRunningTime="2026-01-27 19:49:26.408769557 +0000 UTC m=+10341.507743106" Jan 27 19:49:29 crc kubenswrapper[5049]: I0127 19:49:29.647372 5049 scope.go:117] "RemoveContainer" containerID="7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb" Jan 27 19:49:29 crc kubenswrapper[5049]: E0127 19:49:29.648064 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:49:30 crc kubenswrapper[5049]: I0127 19:49:30.177194 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:30 crc kubenswrapper[5049]: I0127 19:49:30.177287 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:30 crc kubenswrapper[5049]: I0127 19:49:30.224103 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:40 crc kubenswrapper[5049]: I0127 19:49:40.225873 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:40 crc kubenswrapper[5049]: I0127 19:49:40.276135 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqsj"] Jan 27 19:49:40 crc kubenswrapper[5049]: I0127 19:49:40.515972 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rgqsj" podUID="1309ff09-7102-4d4f-98fe-da86b7aabb46" containerName="registry-server" containerID="cri-o://51165c8c96f7cc1482342a8ac005b977f353848c183bd24d953d78c9bf94a76e" gracePeriod=2 Jan 27 19:49:40 crc kubenswrapper[5049]: I0127 19:49:40.646202 5049 scope.go:117] "RemoveContainer" containerID="7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb" Jan 27 19:49:40 crc kubenswrapper[5049]: E0127 19:49:40.646536 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.010652 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.115106 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tksf5\" (UniqueName: \"kubernetes.io/projected/1309ff09-7102-4d4f-98fe-da86b7aabb46-kube-api-access-tksf5\") pod \"1309ff09-7102-4d4f-98fe-da86b7aabb46\" (UID: \"1309ff09-7102-4d4f-98fe-da86b7aabb46\") " Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.115347 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1309ff09-7102-4d4f-98fe-da86b7aabb46-utilities\") pod \"1309ff09-7102-4d4f-98fe-da86b7aabb46\" (UID: \"1309ff09-7102-4d4f-98fe-da86b7aabb46\") " Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.115396 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1309ff09-7102-4d4f-98fe-da86b7aabb46-catalog-content\") pod \"1309ff09-7102-4d4f-98fe-da86b7aabb46\" (UID: \"1309ff09-7102-4d4f-98fe-da86b7aabb46\") " Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.116081 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1309ff09-7102-4d4f-98fe-da86b7aabb46-utilities" (OuterVolumeSpecName: "utilities") pod "1309ff09-7102-4d4f-98fe-da86b7aabb46" (UID: "1309ff09-7102-4d4f-98fe-da86b7aabb46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.122820 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1309ff09-7102-4d4f-98fe-da86b7aabb46-kube-api-access-tksf5" (OuterVolumeSpecName: "kube-api-access-tksf5") pod "1309ff09-7102-4d4f-98fe-da86b7aabb46" (UID: "1309ff09-7102-4d4f-98fe-da86b7aabb46"). InnerVolumeSpecName "kube-api-access-tksf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.146925 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1309ff09-7102-4d4f-98fe-da86b7aabb46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1309ff09-7102-4d4f-98fe-da86b7aabb46" (UID: "1309ff09-7102-4d4f-98fe-da86b7aabb46"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.218111 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1309ff09-7102-4d4f-98fe-da86b7aabb46-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.218372 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1309ff09-7102-4d4f-98fe-da86b7aabb46-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.218450 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tksf5\" (UniqueName: \"kubernetes.io/projected/1309ff09-7102-4d4f-98fe-da86b7aabb46-kube-api-access-tksf5\") on node \"crc\" DevicePath \"\"" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.528110 5049 generic.go:334] "Generic (PLEG): container finished" podID="1309ff09-7102-4d4f-98fe-da86b7aabb46" containerID="51165c8c96f7cc1482342a8ac005b977f353848c183bd24d953d78c9bf94a76e" exitCode=0 Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.528160 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqsj" event={"ID":"1309ff09-7102-4d4f-98fe-da86b7aabb46","Type":"ContainerDied","Data":"51165c8c96f7cc1482342a8ac005b977f353848c183bd24d953d78c9bf94a76e"} Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.528199 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqsj" event={"ID":"1309ff09-7102-4d4f-98fe-da86b7aabb46","Type":"ContainerDied","Data":"b785c2360736bbdb967ecda689681abc22c316efbf5d8f702fb3a2c6a2b86b47"} Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.528217 5049 scope.go:117] "RemoveContainer" containerID="51165c8c96f7cc1482342a8ac005b977f353848c183bd24d953d78c9bf94a76e" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.528225 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgqsj" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.560643 5049 scope.go:117] "RemoveContainer" containerID="1a5b0821fcea1d38fcc4ba85799015f26ff822c6ae80c5af14dd11f41791c9ee" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.571552 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqsj"] Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.580721 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqsj"] Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.600467 5049 scope.go:117] "RemoveContainer" containerID="edcaf59ebf87e54017bcb382582ed464529ce5860fc80d1e619f6902da295966" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.629912 5049 scope.go:117] "RemoveContainer" containerID="51165c8c96f7cc1482342a8ac005b977f353848c183bd24d953d78c9bf94a76e" Jan 27 19:49:41 crc kubenswrapper[5049]: E0127 19:49:41.630998 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51165c8c96f7cc1482342a8ac005b977f353848c183bd24d953d78c9bf94a76e\": container with ID starting with 51165c8c96f7cc1482342a8ac005b977f353848c183bd24d953d78c9bf94a76e not found: ID does not exist" containerID="51165c8c96f7cc1482342a8ac005b977f353848c183bd24d953d78c9bf94a76e" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.631057 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51165c8c96f7cc1482342a8ac005b977f353848c183bd24d953d78c9bf94a76e"} err="failed to get container status \"51165c8c96f7cc1482342a8ac005b977f353848c183bd24d953d78c9bf94a76e\": rpc error: code = NotFound desc = could not find container \"51165c8c96f7cc1482342a8ac005b977f353848c183bd24d953d78c9bf94a76e\": container with ID starting with 51165c8c96f7cc1482342a8ac005b977f353848c183bd24d953d78c9bf94a76e not found: ID does not exist" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.631089 5049 scope.go:117] "RemoveContainer" containerID="1a5b0821fcea1d38fcc4ba85799015f26ff822c6ae80c5af14dd11f41791c9ee" Jan 27 19:49:41 crc kubenswrapper[5049]: E0127 19:49:41.631790 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a5b0821fcea1d38fcc4ba85799015f26ff822c6ae80c5af14dd11f41791c9ee\": container with ID starting with 1a5b0821fcea1d38fcc4ba85799015f26ff822c6ae80c5af14dd11f41791c9ee not found: ID does not exist" containerID="1a5b0821fcea1d38fcc4ba85799015f26ff822c6ae80c5af14dd11f41791c9ee" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.631819 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a5b0821fcea1d38fcc4ba85799015f26ff822c6ae80c5af14dd11f41791c9ee"} err="failed to get container status \"1a5b0821fcea1d38fcc4ba85799015f26ff822c6ae80c5af14dd11f41791c9ee\": rpc error: code = NotFound desc = could not find container \"1a5b0821fcea1d38fcc4ba85799015f26ff822c6ae80c5af14dd11f41791c9ee\": container with ID starting with 1a5b0821fcea1d38fcc4ba85799015f26ff822c6ae80c5af14dd11f41791c9ee not found: ID does not exist" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.631837 5049 scope.go:117] "RemoveContainer" containerID="edcaf59ebf87e54017bcb382582ed464529ce5860fc80d1e619f6902da295966" Jan 27 19:49:41 crc kubenswrapper[5049]: E0127 19:49:41.632493 5049 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"edcaf59ebf87e54017bcb382582ed464529ce5860fc80d1e619f6902da295966\": container with ID starting with edcaf59ebf87e54017bcb382582ed464529ce5860fc80d1e619f6902da295966 not found: ID does not exist" containerID="edcaf59ebf87e54017bcb382582ed464529ce5860fc80d1e619f6902da295966" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.632526 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edcaf59ebf87e54017bcb382582ed464529ce5860fc80d1e619f6902da295966"} err="failed to get container status \"edcaf59ebf87e54017bcb382582ed464529ce5860fc80d1e619f6902da295966\": rpc error: code = NotFound desc = could not find container \"edcaf59ebf87e54017bcb382582ed464529ce5860fc80d1e619f6902da295966\": container with ID starting with edcaf59ebf87e54017bcb382582ed464529ce5860fc80d1e619f6902da295966 not found: ID does not exist" Jan 27 19:49:41 crc kubenswrapper[5049]: I0127 19:49:41.659261 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1309ff09-7102-4d4f-98fe-da86b7aabb46" path="/var/lib/kubelet/pods/1309ff09-7102-4d4f-98fe-da86b7aabb46/volumes" Jan 27 19:49:53 crc kubenswrapper[5049]: I0127 19:49:53.645812 5049 scope.go:117] "RemoveContainer" containerID="7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb" Jan 27 19:49:53 crc kubenswrapper[5049]: E0127 19:49:53.646735 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:50:07 crc kubenswrapper[5049]: I0127 19:50:07.646725 5049 scope.go:117] "RemoveContainer" containerID="7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb" Jan 27 19:50:07 crc kubenswrapper[5049]: E0127 19:50:07.647619 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:50:18 crc kubenswrapper[5049]: I0127 19:50:18.646976 5049 scope.go:117] "RemoveContainer" containerID="7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb" Jan 27 19:50:18 crc kubenswrapper[5049]: E0127 19:50:18.647926 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:50:26 crc kubenswrapper[5049]: I0127 19:50:26.998503 5049 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zpwkw"] Jan 27 19:50:27 crc kubenswrapper[5049]: E0127 19:50:27.000519 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1309ff09-7102-4d4f-98fe-da86b7aabb46" 
containerName="extract-utilities" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.000552 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="1309ff09-7102-4d4f-98fe-da86b7aabb46" containerName="extract-utilities" Jan 27 19:50:27 crc kubenswrapper[5049]: E0127 19:50:27.000573 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1309ff09-7102-4d4f-98fe-da86b7aabb46" containerName="registry-server" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.000581 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="1309ff09-7102-4d4f-98fe-da86b7aabb46" containerName="registry-server" Jan 27 19:50:27 crc kubenswrapper[5049]: E0127 19:50:27.000610 5049 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1309ff09-7102-4d4f-98fe-da86b7aabb46" containerName="extract-content" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.000618 5049 state_mem.go:107] "Deleted CPUSet assignment" podUID="1309ff09-7102-4d4f-98fe-da86b7aabb46" containerName="extract-content" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.000882 5049 memory_manager.go:354] "RemoveStaleState removing state" podUID="1309ff09-7102-4d4f-98fe-da86b7aabb46" containerName="registry-server" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.002616 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zpwkw" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.025909 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zpwkw"] Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.142737 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba97e24a-26b9-411c-9b84-ec0c54843ccf-catalog-content\") pod \"certified-operators-zpwkw\" (UID: \"ba97e24a-26b9-411c-9b84-ec0c54843ccf\") " pod="openshift-marketplace/certified-operators-zpwkw" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.142820 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmvnv\" (UniqueName: \"kubernetes.io/projected/ba97e24a-26b9-411c-9b84-ec0c54843ccf-kube-api-access-tmvnv\") pod \"certified-operators-zpwkw\" (UID: \"ba97e24a-26b9-411c-9b84-ec0c54843ccf\") " pod="openshift-marketplace/certified-operators-zpwkw" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.142927 5049 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba97e24a-26b9-411c-9b84-ec0c54843ccf-utilities\") pod \"certified-operators-zpwkw\" (UID: \"ba97e24a-26b9-411c-9b84-ec0c54843ccf\") " pod="openshift-marketplace/certified-operators-zpwkw" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.244648 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba97e24a-26b9-411c-9b84-ec0c54843ccf-utilities\") pod \"certified-operators-zpwkw\" (UID: \"ba97e24a-26b9-411c-9b84-ec0c54843ccf\") " pod="openshift-marketplace/certified-operators-zpwkw" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.245190 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba97e24a-26b9-411c-9b84-ec0c54843ccf-catalog-content\") pod \"certified-operators-zpwkw\" (UID: 
\"ba97e24a-26b9-411c-9b84-ec0c54843ccf\") " pod="openshift-marketplace/certified-operators-zpwkw" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.245303 5049 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmvnv\" (UniqueName: \"kubernetes.io/projected/ba97e24a-26b9-411c-9b84-ec0c54843ccf-kube-api-access-tmvnv\") pod \"certified-operators-zpwkw\" (UID: \"ba97e24a-26b9-411c-9b84-ec0c54843ccf\") " pod="openshift-marketplace/certified-operators-zpwkw" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.245387 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba97e24a-26b9-411c-9b84-ec0c54843ccf-utilities\") pod \"certified-operators-zpwkw\" (UID: \"ba97e24a-26b9-411c-9b84-ec0c54843ccf\") " pod="openshift-marketplace/certified-operators-zpwkw" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.246281 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba97e24a-26b9-411c-9b84-ec0c54843ccf-catalog-content\") pod \"certified-operators-zpwkw\" (UID: \"ba97e24a-26b9-411c-9b84-ec0c54843ccf\") " pod="openshift-marketplace/certified-operators-zpwkw" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.277856 5049 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmvnv\" (UniqueName: \"kubernetes.io/projected/ba97e24a-26b9-411c-9b84-ec0c54843ccf-kube-api-access-tmvnv\") pod \"certified-operators-zpwkw\" (UID: \"ba97e24a-26b9-411c-9b84-ec0c54843ccf\") " pod="openshift-marketplace/certified-operators-zpwkw" Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.325104 5049 util.go:30] "No sandbox for pod can be found. 
Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.325104 5049 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zpwkw"
Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.841821 5049 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zpwkw"]
Jan 27 19:50:27 crc kubenswrapper[5049]: I0127 19:50:27.953957 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpwkw" event={"ID":"ba97e24a-26b9-411c-9b84-ec0c54843ccf","Type":"ContainerStarted","Data":"8c75ea57347ca3442852669d4d3be246e497def221cef4f9c767c07c17cb5bf5"}
Jan 27 19:50:28 crc kubenswrapper[5049]: I0127 19:50:28.966230 5049 generic.go:334] "Generic (PLEG): container finished" podID="ba97e24a-26b9-411c-9b84-ec0c54843ccf" containerID="28fb7cdb4854b8b59c2cee072b8b82b238afacf040785a8ed4315fcd4af4b1c2" exitCode=0
Jan 27 19:50:28 crc kubenswrapper[5049]: I0127 19:50:28.966595 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpwkw" event={"ID":"ba97e24a-26b9-411c-9b84-ec0c54843ccf","Type":"ContainerDied","Data":"28fb7cdb4854b8b59c2cee072b8b82b238afacf040785a8ed4315fcd4af4b1c2"}
Jan 27 19:50:29 crc kubenswrapper[5049]: I0127 19:50:29.646700 5049 scope.go:117] "RemoveContainer" containerID="7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb"
Jan 27 19:50:29 crc kubenswrapper[5049]: E0127 19:50:29.647143 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"
Jan 27 19:50:31 crc kubenswrapper[5049]: I0127 19:50:31.995669 5049 generic.go:334] "Generic (PLEG): container finished" podID="ba97e24a-26b9-411c-9b84-ec0c54843ccf" containerID="ff2c256ce0437ede6ba8044970a54cd353380b8ddf9d8eaa3a10af019cd91c6e" exitCode=0
Jan 27 19:50:31 crc kubenswrapper[5049]: I0127 19:50:31.995722 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpwkw" event={"ID":"ba97e24a-26b9-411c-9b84-ec0c54843ccf","Type":"ContainerDied","Data":"ff2c256ce0437ede6ba8044970a54cd353380b8ddf9d8eaa3a10af019cd91c6e"}
Jan 27 19:50:34 crc kubenswrapper[5049]: I0127 19:50:34.011479 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpwkw" event={"ID":"ba97e24a-26b9-411c-9b84-ec0c54843ccf","Type":"ContainerStarted","Data":"aebc8a477cd5cd132e5b6506b62ab9f9791c360e62f74b7ca134c173378791c6"}
Jan 27 19:50:34 crc kubenswrapper[5049]: I0127 19:50:34.034654 5049 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zpwkw" podStartSLOduration=3.969258269 podStartE2EDuration="8.034639566s" podCreationTimestamp="2026-01-27 19:50:26 +0000 UTC" firstStartedPulling="2026-01-27 19:50:28.969005933 +0000 UTC m=+10404.067979502" lastFinishedPulling="2026-01-27 19:50:33.03438725 +0000 UTC m=+10408.133360799" observedRunningTime="2026-01-27 19:50:34.030426856 +0000 UTC m=+10409.129400405" watchObservedRunningTime="2026-01-27 19:50:34.034639566 +0000 UTC m=+10409.133613115"
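Note on the pod_startup_latency_tracker entry just above: its numbers are internally consistent. The image-pull window runs from firstStartedPulling m=+10404.067979502 to lastFinishedPulling m=+10408.133360799, i.e. 4.065381297 s on the kubelet's monotonic clock, and podStartE2EDuration 8.034639566 s minus that window is exactly the logged podStartSLOduration of 3.969258269 s: the SLO figure excludes image-pull time. The same check as a runnable snippet (a worked arithmetic sketch, not kubelet source):

// start_latency.go - verify podStartSLOduration = podStartE2EDuration
// minus the image-pull window, using the m=+... monotonic offsets above.
package main

import "fmt"

func main() {
	const (
		e2e           = 8.034639566     // podStartE2EDuration, seconds
		pullStartMono = 10404.067979502 // firstStartedPulling, m=+...
		pullEndMono   = 10408.133360799 // lastFinishedPulling, m=+...
	)
	pull := pullEndMono - pullStartMono
	fmt.Printf("image pull window: %.9fs\n", pull)     // 4.065381297s
	fmt.Printf("SLO duration:      %.9fs\n", e2e-pull) // 3.969258269s, as logged
}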
Jan 27 19:50:37 crc kubenswrapper[5049]: I0127 19:50:37.325913 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zpwkw"
Jan 27 19:50:37 crc kubenswrapper[5049]: I0127 19:50:37.327288 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zpwkw"
Jan 27 19:50:37 crc kubenswrapper[5049]: I0127 19:50:37.382110 5049 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zpwkw"
Jan 27 19:50:38 crc kubenswrapper[5049]: I0127 19:50:38.094721 5049 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zpwkw"
Jan 27 19:50:38 crc kubenswrapper[5049]: I0127 19:50:38.161889 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zpwkw"]
Jan 27 19:50:40 crc kubenswrapper[5049]: I0127 19:50:40.063069 5049 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zpwkw" podUID="ba97e24a-26b9-411c-9b84-ec0c54843ccf" containerName="registry-server" containerID="cri-o://aebc8a477cd5cd132e5b6506b62ab9f9791c360e62f74b7ca134c173378791c6" gracePeriod=2
Jan 27 19:50:40 crc kubenswrapper[5049]: I0127 19:50:40.584159 5049 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zpwkw"
Jan 27 19:50:40 crc kubenswrapper[5049]: I0127 19:50:40.752249 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba97e24a-26b9-411c-9b84-ec0c54843ccf-utilities\") pod \"ba97e24a-26b9-411c-9b84-ec0c54843ccf\" (UID: \"ba97e24a-26b9-411c-9b84-ec0c54843ccf\") "
Jan 27 19:50:40 crc kubenswrapper[5049]: I0127 19:50:40.752293 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmvnv\" (UniqueName: \"kubernetes.io/projected/ba97e24a-26b9-411c-9b84-ec0c54843ccf-kube-api-access-tmvnv\") pod \"ba97e24a-26b9-411c-9b84-ec0c54843ccf\" (UID: \"ba97e24a-26b9-411c-9b84-ec0c54843ccf\") "
Jan 27 19:50:40 crc kubenswrapper[5049]: I0127 19:50:40.752348 5049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba97e24a-26b9-411c-9b84-ec0c54843ccf-catalog-content\") pod \"ba97e24a-26b9-411c-9b84-ec0c54843ccf\" (UID: \"ba97e24a-26b9-411c-9b84-ec0c54843ccf\") "
Jan 27 19:50:40 crc kubenswrapper[5049]: I0127 19:50:40.753935 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba97e24a-26b9-411c-9b84-ec0c54843ccf-utilities" (OuterVolumeSpecName: "utilities") pod "ba97e24a-26b9-411c-9b84-ec0c54843ccf" (UID: "ba97e24a-26b9-411c-9b84-ec0c54843ccf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
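Note on "Killing container with a grace period" gracePeriod=2 just above: the runtime gives registry-server two seconds between the polite stop signal and the forced kill. A standalone illustration of that stop pattern, under clearly stated assumptions: a toy sleep process stands in for the container, and the kubelet itself drives this through the CRI StopContainer call rather than signalling processes directly (Unix-only sketch):

// term_then_kill.go - SIGTERM, wait up to the grace period, then SIGKILL.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for a container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	cmd.Process.Signal(syscall.SIGTERM) // polite request to exit
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(2 * time.Second): // gracePeriod=2, as in the log
		cmd.Process.Kill() // escalate to SIGKILL
		fmt.Println("grace period expired; killed:", <-done)
	}
}

Here the container exits promptly: the ContainerDied events below arrive about a second after the kill was issued, well inside the two-second grace window.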
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 19:50:40 crc kubenswrapper[5049]: I0127 19:50:40.822302 5049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba97e24a-26b9-411c-9b84-ec0c54843ccf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba97e24a-26b9-411c-9b84-ec0c54843ccf" (UID: "ba97e24a-26b9-411c-9b84-ec0c54843ccf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 19:50:40 crc kubenswrapper[5049]: I0127 19:50:40.854287 5049 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba97e24a-26b9-411c-9b84-ec0c54843ccf-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 19:50:40 crc kubenswrapper[5049]: I0127 19:50:40.854325 5049 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmvnv\" (UniqueName: \"kubernetes.io/projected/ba97e24a-26b9-411c-9b84-ec0c54843ccf-kube-api-access-tmvnv\") on node \"crc\" DevicePath \"\"" Jan 27 19:50:40 crc kubenswrapper[5049]: I0127 19:50:40.854337 5049 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba97e24a-26b9-411c-9b84-ec0c54843ccf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.079430 5049 generic.go:334] "Generic (PLEG): container finished" podID="ba97e24a-26b9-411c-9b84-ec0c54843ccf" containerID="aebc8a477cd5cd132e5b6506b62ab9f9791c360e62f74b7ca134c173378791c6" exitCode=0 Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.079475 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpwkw" event={"ID":"ba97e24a-26b9-411c-9b84-ec0c54843ccf","Type":"ContainerDied","Data":"aebc8a477cd5cd132e5b6506b62ab9f9791c360e62f74b7ca134c173378791c6"} Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.079516 5049 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpwkw" event={"ID":"ba97e24a-26b9-411c-9b84-ec0c54843ccf","Type":"ContainerDied","Data":"8c75ea57347ca3442852669d4d3be246e497def221cef4f9c767c07c17cb5bf5"} Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.079534 5049 scope.go:117] "RemoveContainer" containerID="aebc8a477cd5cd132e5b6506b62ab9f9791c360e62f74b7ca134c173378791c6" Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.079524 5049 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zpwkw" Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.117828 5049 scope.go:117] "RemoveContainer" containerID="ff2c256ce0437ede6ba8044970a54cd353380b8ddf9d8eaa3a10af019cd91c6e" Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.131113 5049 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zpwkw"] Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.146037 5049 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zpwkw"] Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.160370 5049 scope.go:117] "RemoveContainer" containerID="28fb7cdb4854b8b59c2cee072b8b82b238afacf040785a8ed4315fcd4af4b1c2" Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.211050 5049 scope.go:117] "RemoveContainer" containerID="aebc8a477cd5cd132e5b6506b62ab9f9791c360e62f74b7ca134c173378791c6" Jan 27 19:50:41 crc kubenswrapper[5049]: E0127 19:50:41.212126 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aebc8a477cd5cd132e5b6506b62ab9f9791c360e62f74b7ca134c173378791c6\": container with ID starting with aebc8a477cd5cd132e5b6506b62ab9f9791c360e62f74b7ca134c173378791c6 not found: ID does not exist" containerID="aebc8a477cd5cd132e5b6506b62ab9f9791c360e62f74b7ca134c173378791c6" Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.212168 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aebc8a477cd5cd132e5b6506b62ab9f9791c360e62f74b7ca134c173378791c6"} err="failed to get container status \"aebc8a477cd5cd132e5b6506b62ab9f9791c360e62f74b7ca134c173378791c6\": rpc error: code = NotFound desc = could not find container \"aebc8a477cd5cd132e5b6506b62ab9f9791c360e62f74b7ca134c173378791c6\": container with ID starting with aebc8a477cd5cd132e5b6506b62ab9f9791c360e62f74b7ca134c173378791c6 not found: ID does not exist" Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.212210 5049 scope.go:117] "RemoveContainer" containerID="ff2c256ce0437ede6ba8044970a54cd353380b8ddf9d8eaa3a10af019cd91c6e" Jan 27 19:50:41 crc kubenswrapper[5049]: E0127 19:50:41.214170 5049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff2c256ce0437ede6ba8044970a54cd353380b8ddf9d8eaa3a10af019cd91c6e\": container with ID starting with ff2c256ce0437ede6ba8044970a54cd353380b8ddf9d8eaa3a10af019cd91c6e not found: ID does not exist" containerID="ff2c256ce0437ede6ba8044970a54cd353380b8ddf9d8eaa3a10af019cd91c6e" Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.214208 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2c256ce0437ede6ba8044970a54cd353380b8ddf9d8eaa3a10af019cd91c6e"} err="failed to get container status \"ff2c256ce0437ede6ba8044970a54cd353380b8ddf9d8eaa3a10af019cd91c6e\": rpc error: code = NotFound desc = could not find container \"ff2c256ce0437ede6ba8044970a54cd353380b8ddf9d8eaa3a10af019cd91c6e\": container with ID starting with ff2c256ce0437ede6ba8044970a54cd353380b8ddf9d8eaa3a10af019cd91c6e not found: ID does not exist" Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.214232 5049 scope.go:117] "RemoveContainer" containerID="28fb7cdb4854b8b59c2cee072b8b82b238afacf040785a8ed4315fcd4af4b1c2" Jan 27 19:50:41 crc kubenswrapper[5049]: E0127 19:50:41.214773 5049 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"28fb7cdb4854b8b59c2cee072b8b82b238afacf040785a8ed4315fcd4af4b1c2\": container with ID starting with 28fb7cdb4854b8b59c2cee072b8b82b238afacf040785a8ed4315fcd4af4b1c2 not found: ID does not exist" containerID="28fb7cdb4854b8b59c2cee072b8b82b238afacf040785a8ed4315fcd4af4b1c2" Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.214827 5049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28fb7cdb4854b8b59c2cee072b8b82b238afacf040785a8ed4315fcd4af4b1c2"} err="failed to get container status \"28fb7cdb4854b8b59c2cee072b8b82b238afacf040785a8ed4315fcd4af4b1c2\": rpc error: code = NotFound desc = could not find container \"28fb7cdb4854b8b59c2cee072b8b82b238afacf040785a8ed4315fcd4af4b1c2\": container with ID starting with 28fb7cdb4854b8b59c2cee072b8b82b238afacf040785a8ed4315fcd4af4b1c2 not found: ID does not exist" Jan 27 19:50:41 crc kubenswrapper[5049]: I0127 19:50:41.657423 5049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba97e24a-26b9-411c-9b84-ec0c54843ccf" path="/var/lib/kubelet/pods/ba97e24a-26b9-411c-9b84-ec0c54843ccf/volumes" Jan 27 19:50:42 crc kubenswrapper[5049]: I0127 19:50:42.646915 5049 scope.go:117] "RemoveContainer" containerID="7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb" Jan 27 19:50:42 crc kubenswrapper[5049]: E0127 19:50:42.647454 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:50:57 crc kubenswrapper[5049]: I0127 19:50:57.647552 5049 scope.go:117] "RemoveContainer" containerID="7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb" Jan 27 19:50:57 crc kubenswrapper[5049]: E0127 19:50:57.649093 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a" Jan 27 19:51:10 crc kubenswrapper[5049]: I0127 19:51:10.646973 5049 scope.go:117] "RemoveContainer" containerID="7a72a7c62c02bb0a5b1f3663642b584cd204aea9d62e0becd192e0e208acf0bb" Jan 27 19:51:10 crc kubenswrapper[5049]: E0127 19:51:10.647946 5049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2d7n9_openshift-machine-config-operator(b714597d-68b8-4f8f-9d55-9f1cea23324a)\"" pod="openshift-machine-config-operator/machine-config-daemon-2d7n9" podUID="b714597d-68b8-4f8f-9d55-9f1cea23324a"